2372
ALF Tales
ALF Tales is a 30-minute Saturday morning animated series that aired on NBC from September 10, 1988, to December 9, 1989. The show is a spin-off of "ALF: The Animated Series" and featured characters from that series playing various characters from fairy tales. The fairy tales were usually altered for comedic effect in a manner akin to Jay Ward's "Fractured Fairy Tales". The episodes were performed in the style of a resident theater company or ensemble cast, where Gordon and Rhonda would take the leading male and female roles and the other characters were cast according to their characteristics. Many stories spoof a film genre, such as the "Cinderella" episode, which is presented like an Elvis Presley film. Some episodes featured a "fourth wall" effect in which Gordon is backstage preparing for the episode and Rob Cowan appears, drawn as a TV executive (who introduces himself as "Roger Cowan, network executive"), trying to brief Gordon on how to improve the episode. For instance, Cowan once told Gordon, who was readying for a medieval-themed episode, that "less than 2% of our audience lives in the Dark Ages". Home media. The first seven episodes were released on DVD on May 30, 2006, in Region 1 by Lionsgate Home Entertainment in a single-disc release entitled "ALF and The Beanstalk and Other Classic Fairy Tales".
2376
Abdul Rashid Dostum
Abdul Rashid Dostum (born 25 March 1954) is an Afghan exiled politician, former Marshal in the Afghan National Army, and founder and leader of the political party Junbish-e Milli. Dostum was a major army commander in the communist government during the Soviet–Afghan War, and in 2001 was the key indigenous ally to US Special Forces and the CIA during the campaign to topple the Taliban government. He is one of the most powerful warlords since the beginning of the Afghan wars, known for siding with the winners of different wars. An ethnic Uzbek from a peasant family in Jawzjan province, Dostum joined the People's Democratic Party of Afghanistan (PDPA) as a teenager before enlisting in the Afghan National Army and training as a paratrooper, serving in his native region around Sheberghan. With the start of the Soviet–Afghan War, Dostum commanded a KHAD militia and gradually gained a reputation, often defeating mujahideen commanders in northern Afghanistan and even persuading some to defect to the communist cause. Much of the country's north came under strong government control as a result. He achieved several promotions in the army and was honored as a "Hero of Afghanistan" by President Mohammed Najibullah in 1988. By this time he was commanding up to 45,000 troops in the region under his responsibility. Following the dissolution of the Soviet Union, Dostum played a central role in the collapse of Najibullah's government by defecting to the mujahideen; the division-sized forces loyal to him in the north became an independent paramilitary of his newly founded party, Junbish-e Milli. He allied with Ahmad Shah Massoud and together they captured Kabul, before another civil war loomed. Initially supporting the new government of Burhanuddin Rabbani, he switched sides in 1994 by allying with Gulbuddin Hekmatyar, but backed Rabbani again by 1996. During this time he remained in control of the country's north, which functioned as a relatively stable proto-state, and he remained a loose partner of Massoud in the Northern Alliance. A year later, Mazar-i-Sharif was overrun by his former aide Abdul Malik Pahlawan, resulting in a battle in which Dostum regained control. In 1998, the city was overrun by the Taliban and Dostum fled the country, returning to Afghanistan in 2001, joining the Northern Alliance forces after the US invasion and leading his loyal faction in the Fall of Mazar-i-Sharif. After the fall of the Taliban, he joined interim president Hamid Karzai's administration as Deputy Defense Minister and later served as chairman of the Joint Chiefs of Staff of the Afghan Army, a role often viewed as ceremonial. His militia feuded with forces loyal to general Atta Muhammad Nur. Dostum was a candidate in the 2004 elections, and was an ally of the victorious Karzai in the 2009 elections. From 2011, he was part of the leadership council of the National Front of Afghanistan along with Ahmad Zia Massoud and Mohammad Mohaqiq. He served as Vice President of Afghanistan in Ashraf Ghani's administration from 2014 to 2020. In 2020, he was promoted to the rank of marshal after a political agreement between Ghani and former Chief Executive Abdullah Abdullah. Dostum is a controversial figure in Afghanistan. He is seen as a capable and fierce military leader and remains wildly popular among the Uzbek community in the country; many of his supporters call him "Pasha" (پاشا), an honorific Uzbek/Turkic title. 
However, he has also been widely accused of committing atrocities and war crimes, most notoriously the suffocation of up to 1,000 Taliban fighters in the Dasht-i-Leili massacre, and he was widely feared among the populace. In 2018, the International Criminal Court (ICC) was reported to be considering launching an inquiry into whether Dostum had engaged in war crimes in Afghanistan. Early life. Dostum was born in 1954 in Khwaja Du Koh near Sheberghan in Jowzjan province, Afghanistan. Coming from an impoverished ethnic Uzbek family, he received only a basic traditional education, as he was forced to drop out of school at a young age. From there, he took up work in the village's major gas fields. Career. Dostum began working in 1970 in a state-owned gas refinery in Sheberghan, participating in union politics, as the new government started to arm the workers in the oil and gas refineries. The reason for this was to create "groups for the defense of the revolution". Because of the new communist ideas entering Afghanistan in the 1970s, he enlisted in the Afghan National Army in 1978. Dostum received his basic military training in Jalalabad. His squadron was deployed in the rural areas around Sheberghan, under the auspices of the Ministry of National Security. As a Parcham member of the People's Democratic Party of Afghanistan (PDPA), he was driven into exile by the purge carried out by the party's Khalqist leaders, living in Peshawar, Pakistan, for a while. After the Soviet invasion (Operation Storm-333) and the installation of Babrak Karmal as head of state, Dostum returned to Afghanistan, where he started commanding a local pro-government militia in his native Jawzjan Province. Soviet–Afghan War. By the mid-1980s, he commanded around 20,000 militiamen and controlled the northern provinces of Afghanistan. While the unit recruited throughout Jowzjan and had a relatively broad base, many of its early troops and commanders came from Dostum's home village. He left the army after the purge of Parchamites, but returned after the Soviet occupation began. During the Soviet–Afghan War, Dostum commanded a militia battalion that fought and routed mujahideen forces; he had been appointed an officer due to prior military experience. This unit eventually became a regiment and was later incorporated into the defense forces as the 53rd Infantry Division. Dostum and his new division reported directly to President Mohammad Najibullah. Later he became the commander of military unit 374 in Jowzjan. He defended the Soviet-backed Afghan government against the mujahideen forces throughout the 1980s. While he was only a regional commander, he had largely raised his forces by himself. The Jowzjani militia Dostum controlled was one of the few in the country which could be deployed outside its own region. They were deployed in Kandahar in 1988 when Soviet forces were withdrawing from Afghanistan. For his efforts in the army, Dostum was awarded the title "Hero of the Republic of Afghanistan" by President Najibullah. Civil war and northern Afghanistan autonomous state. Dostum's men became an important force in the fall of Kabul in 1992, with Dostum deciding to defect from Najibullah and ally himself with opposition commanders Ahmad Shah Massoud and Sayed Jafar Naderi, the head of the Isma'ili community, and together they captured the capital city. With the help of fellow defectors Mohammad Nabi Azimi and Abdul Wakil, his forces entered Kabul by air on the afternoon of 14 April. 
He and Massoud fought in a coalition against Gulbuddin Hekmatyar. Massoud and Dostum's forces joined to defend Kabul against Hekmatyar. Some 4,000–5,000 of his troops, units of his Sheberghan-based 53rd Division and Balkh-based Guards Division, garrisoned Bala Hissar fort, Maranjan Hill and Khwaja Rawash Airport, where they stopped Najibullah from entering to flee. Dostum then left Kabul for his northern stronghold Mazar-i-Sharif, where he ruled, in effect, an independent region (or 'proto-state'), often referred to as the Northern Autonomous Zone. He printed his own Afghan currency, ran a small airline named Balkh Air, and formed relations with countries such as Uzbekistan, effectively creating his own proto-state with an army of up to 40,000 men and tanks supplied by Uzbekistan and Russia. While the rest of the country was in chaos, his region remained prosperous and functional, which won him support from people of all ethnic groups. Many people fled to his territory to escape the violence and fundamentalism later imposed by the Taliban. In 1994, Dostum allied himself with Gulbuddin Hekmatyar against the government of Burhanuddin Rabbani and Ahmad Shah Massoud, but in 1995 sided with the government again. Taliban era. Following the rise of the Taliban and their capture of Kabul, Dostum aligned himself with the Northern Alliance (United Front) against the Taliban. The Northern Alliance was assembled in late 1996 by Dostum, Massoud and Karim Khalili against the Taliban. At this point he is said to have had a force of some 50,000 men supported by both aircraft and tanks. Like other Northern Alliance leaders, Dostum faced infighting within his group and was later forced to surrender his power to General Abdul Malik Pahlawan. Malik entered into secret negotiations with the Taliban, who promised to respect his authority over much of northern Afghanistan in exchange for the apprehension of Ismail Khan, one of their enemies. Accordingly, on 25 May 1997, Malik arrested Khan, handed him over and let the Taliban enter Mazar-e-Sharif, giving them control over most of northern Afghanistan. Because of this, Dostum was forced to flee to Turkey. However, Malik soon realized that the Taliban were not sincere in their promises as he saw his men being disarmed. He then rejoined the Northern Alliance and turned against his erstwhile allies, driving them from Mazar-e-Sharif. In October 1997, Dostum returned from exile and retook charge. After Dostum briefly regained control of Mazar-e-Sharif, the Taliban returned in 1998 and he again fled to Turkey. Operation Enduring Freedom. Dostum returned to Afghanistan in May 2001 to open a new front against the Taliban before the U.S.-led campaign began, joining Commander Massoud, Ismail Khan and Mohammad Mohaqiq. On 17 October 2001, the CIA's eight-man Team Alpha, including Johnny Micheal Spann, landed in the Dar-e-Suf to link up with Dostum. Three days later, the 12 members of Operational Detachment Alpha (ODA) 595 landed to join forces with Dostum and Team Alpha. Dostum, the Tajik commander Atta Muhammad Nur and their American allies defeated Taliban forces and recaptured Mazar-i-Sharif on 10 November 2001. On 24 November 2001, 15,000 Taliban soldiers were due to surrender after the Siege of Kunduz to American and Northern Alliance forces. Instead, 400 Al-Qaeda prisoners arrived just outside Mazar-i-Sharif. After they surrendered to Dostum, they were transferred to the 19th-century garrison fortress, Qala-i-Jangi. 
The next day, while being questioned by CIA officers Spann and David Tyson, the prisoners used concealed weapons to revolt, triggering what became the Battle of Qala-i-Jangi against the guards. The uprising was finally brought under control after six days. Dasht-i-Leili massacre. Dostum has been accused by Western journalists of responsibility for the suffocation or other killing of Taliban prisoners in December 2001, with the number of victims estimated at 2,000. In 2009, Dostum denied the accusations and US President Obama ordered an investigation into the massacre. Karzai administration. In the aftermath of the Taliban's removal from northern Afghanistan, forces loyal to Dostum frequently clashed with Tajik forces loyal to Atta Muhammad Nur. Atta's men kidnapped and killed a number of Dostum's men, and constantly agitated to gain control of Mazar-e-Sharif. Through the political mediation of the Karzai administration, the International Security Assistance Force (ISAF) and the United Nations, the Dostum-Atta feud gradually declined, leading to their alignment in a new political party. Dostum served as deputy defense minister during the early period of the Karzai administration. On 20 May 2003, Dostum narrowly escaped an assassination attempt. He often resided outside Afghanistan, mainly in Turkey. In February 2008, he was suspended after the apparent kidnapping and torture of a political rival. Time in Turkey. Some media reports in 2008 stated that Dostum was "seeking political asylum" in Turkey, while others said he was exiled. One Turkish media outlet said Dostum was visiting after flying there with Turkey's then Foreign Minister Ali Babacan during a meeting of the Organization for Security and Cooperation in Europe (OSCE). On 16 August 2009, Dostum was asked to return from exile to Afghanistan to support President Hamid Karzai in his bid for re-election. He later flew by helicopter to his northern stronghold of Sheberghan, where he was greeted by thousands of his supporters in the local stadium. He subsequently made overtures to the United States, promising he could "destroy the Taliban and al Qaeda" if supported by the U.S., saying that "the U.S. needs strong friends like Dostum." Ghani administration. On 7 October 2013, the day after filing his nomination for the 2014 general elections as running mate of Ashraf Ghani, Dostum issued a press statement that some news media welcomed as "apologies": "Many mistakes were made during the civil war (…) It is time we apologize to the Afghan people who were sacrificed due to our negative policies (…) I apologize to the people who suffered from the violence and civil war (…)". Dostum was elected First Vice President of Afghanistan in the April–June 2014 Afghan presidential election, alongside Ashraf Ghani as president and Sarwar Danish as second vice president. In July 2016, Human Rights Watch accused Abdul Rashid Dostum's National Islamic Movement of Afghanistan of killing, abusing and looting civilians in the northern Faryab Province during June. Militia forces loyal to Dostum stated that the civilians they targeted – at least 13 killed and 32 wounded – were supporters of the Taliban. In November 2016, at a buzkashi match, he punched his political rival Ahmad Ischi, after which his bodyguards beat Ischi. In 2017, he was accused of having Ischi kidnapped in that incident and raped with a gun on camera during a five-day detention, claims that Dostum denies but that nevertheless forced him into exile in Turkey. 
On 26 July 2018, he narrowly escaped a suicide bombing by ISIL-KP as he returned to Afghanistan at Kabul airport. Just after Dostum's convoy departed the airport, an attacker armed with a suicide vest bombed a crowd of several hundred people celebrating his return at the entrance to the airport. The attack killed 14 and injured 50, including civilians and armed security. On 30 March 2019, Dostum again escaped an apparent assassination attempt while traveling from Mazar-e-Sharif to Jawzjan Province, though two of his bodyguards were killed. The Taliban claimed responsibility for the attack, the second in eight months. On 11 August 2021, during the Taliban's nationwide offensive, Dostum, along with Atta Muhammad Nur, led the government's defence of the city of Mazar-i-Sharif. Three days later, they fled across Hairatan to Uzbekistan. Atta Nur claimed that they were forced to flee due to a "conspiracy". Both men later pledged allegiance to the National Resistance Front of Afghanistan, the remnant of the collapsed Islamic Republic of Afghanistan. Dostum, Atta, Yunus Qanuni, Abdul Rasul Sayyaf and some other political figures formed the "Supreme Council of National Resistance of the Islamic Republic of Afghanistan" in opposition to the new Taliban regime in October 2021. Political and social views. Dostum is considered to be liberal and somewhat leftist. An ethnic Uzbek, he has worked on the battlefield with leaders from all other major ethnic groups: Hazaras, Tajiks and Pashtuns. When Dostum was ruling his northern Afghanistan proto-state before the Taliban took over in 1998, women were able to go about unveiled, girls were allowed to go to school and study at the University of Balkh, cinemas showed Indian films, music played on television, and Russian vodka and German beer were openly available: activities which were all banned by the Taliban. He viewed the ISAF forces' attempt to crush the Taliban as ineffective and went on record in 2007 saying that he could mop up the Taliban "in six months" if allowed to raise a 10,000-strong army of Afghan veterans. As of 2007, senior Afghan government officials did not trust Dostum, as they were concerned that he might be secretly rearming his forces. Personal life. Dostum is notably tall and has been described as "beefy". He generally favors wearing a camouflage Soviet-style military uniform and has a trademark bushy moustache. Dostum was married to a woman named Khadija. According to Brian Glyn Williams, Khadija died accidentally in the 1990s, which broke Dostum as he "really loved his wife". Dostum eventually remarried after Khadija's death. He named one of his sons Mustafa Kamal, after the founder of the modern Turkish republic, Mustafa Kemal Ataturk. Dostum has spent a considerable amount of time in Turkey, and some of his family reside there. Dostum is known to drink alcohol, a rarity in Afghanistan, and is apparently a fan of Russian vodka. He reportedly suffered from diabetes. In 2014, when he became vice president, Dostum reportedly gave up drinking in favor of healthy meals and morning jogs.
2377
Andhra Pradesh
Andhra Pradesh (abbr. AP) is a state in the southern coastal region of India. It is the seventh-largest state by area and the tenth-most-populous, with 49,577,103 inhabitants. It shares borders with Telangana, Chhattisgarh, Odisha, Tamil Nadu and Karnataka, and is bounded by the Bay of Bengal to the east. It has the second-longest coastline in India. After existing as Andhra State and then as part of unified Andhra Pradesh, the state took its present form on 2 June 2014, when the new state of Telangana was formed through bifurcation. Amaravati serves as the capital of the state, with the largest city being Visakhapatnam. Water-sharing disputes and the division of assets with Telangana are not yet resolved. Telugu, one of the classical languages of India and spoken by the majority of the people, is an official language. As per the 8th century BCE Rigvedic text Aitareya Brahmana, the Andhras left North India from the banks of the Yamuna river and migrated to South India. In the third century BCE, Andhra was a vassal kingdom of Ashoka of the Mauryan Empire. After his death, it became powerful and extended its empire to the whole of the Maratha country and beyond under the rule of the Satavahana dynasty. After that, the major rulers included the Pallavas, Eastern Chalukyas, Kakatiyas, Vijayanagara empire, Qutb Shahi dynasty, Nizam dynasty, East India Company, and the British Raj. The Eastern Ghats are a major dividing line separating the coastal plains and the peneplains. The coastal plains are part of Coastal Andhra. These are mostly delta regions formed by the Krishna, Godavari, and Penna rivers. The peneplains are part of Rayalaseema. 60% of the population is engaged in agriculture and related activities. Rice is the major food crop and staple food of the state. The state contributes 10% of the total fish and over 70% of the shrimp production of India. Industry sectors such as food products, non-metallic minerals, textiles and pharmaceuticals are the top employment providers. The automotive sector accounts for 10% of India's auto exports. The state has about one-third of India's limestone reserves and large deposits of baryte and galaxy granite, apart from reserves of oil and natural gas. Satish Dhawan Space Centre (SDSC), known as Sriharikota Range (SHAR), on the barrier island of Sriharikota in Tirupati district, is the satellite launching station of India. Some of the unique products from the state are Banaganapalle mangoes, Bandar laddu, Kondapalli toys, Tirupati laddu, and saris made in Dharmavaram and Machilipatnam. Kuchipudi is the official dance form. Many composers of Carnatic music, such as Annamacharya, Kshetrayya, Tyagaraja, and Bhadrachala Ramadas, were from this region. The Tirumala Venkateswara temple near Tirupati is the most visited Hindu religious place in the world. The state is home to a variety of other pilgrimage centres and natural attractions. History. Toponym. According to the Sanskrit text Aitareya Brahmana (800–500 BCE), a group of people named Andhras left North India from the banks of the Yamuna river and settled in South India. The Satavahanas are mentioned by the names "Andhra", "Andhrara-jateeya", and "Andhrabhrtya" in the Puranic literature. They did not refer to themselves as "Andhra" in any of their coins or inscriptions; it is possible that they were termed Andhras because of their ethnicity or because their territory included the Andhra region. Early and medieval history. The Assaka "mahajanapada", one of the sixteen Vedic "mahajanapadas", included Andhra, Maharashtra and Telangana. 
Archaeological evidence from places such as Bhattiprolu, Amaravati, Dharanikota, and Vaddamanu suggests that the Andhra region was part of the Mauryan empire. Amaravati might have been a regional centre for Mauryan rule. After the death of Emperor Ashoka, Mauryan rule weakened around 200 BCE and was replaced by several smaller kingdoms in the Andhra region. Some of the earliest evidence of the Brahmi script, the progenitor of several scripts including Telugu, comes from Bhattiprolu, where the script was used on an urn containing relics of the Buddha. The Satavahana dynasty dominated the Deccan plateau from the 1st century BCE to the 3rd century CE. It had trade relations with the Roman empire. The later Satavahanas made Dharanikota near Amaravati their capital. According to Buddhist tradition, Nagarjuna, the philosopher of Mahayana, lived in this region. The Andhra Ikshvakus, with their capital at Vijayapuri, succeeded the Satavahanas in the Krishna River valley in the latter half of the 2nd century. The Salankayanas were an ancient dynasty that ruled the Andhra region between the Godavari and the Krishna, with their capital at Vengi (modern Pedavegi), from 300 to 440 CE. The Telugu Cholas ruled present-day Rayalaseema from the fifth to the eleventh centuries from Cuddapa and Jammalamadugu. The Telugu inscription of Erikal Mutturaju Dhananjaya Varma, known as the Erragudipadu inscription, was engraved in 575 CE in the present-day Kadapa district. It is the earliest written record in the Telugu language. The Vishnukundinas, in the fifth and sixth centuries, were the first dynasty to hold sway over most of Andhra, Kalinga, and parts of Telangana. The Eastern Chalukyas of Vengi, whose dynasty lasted for around five hundred years from the 7th century until 1130 CE, eventually merged with the Chola dynasty. They continued to rule under the protection of the Chola dynasty until 1189 CE. At the request of King Rajaraja Narendra, Nannaya, considered the first Telugu poet, took up the translation of the Mahabharata into Telugu in 1025 CE. The Kakatiyas ruled this region and Telangana for nearly two hundred years between the 12th and 14th centuries. They were defeated by the Delhi Sultanate. The Musunuri Nayaks and the Bahmani sultanate took over when the Delhi Sultanate became weak. The Reddi kingdom ruled parts of this region in the early 14th century. They constructed the Kondaveedu and Kondapalli forts. After their rule, the Gajapatis and Bahmani sultans ruled this region in succession before it, along with most of present-day Andhra Pradesh, became part of the Vijayanagara empire. The Vijayanagara empire originated in the Deccan plateau in the early 14th century. It was established in 1336 by Harihara Raya I and his brother Bukka Raya I of the Sangama dynasty, who had served as treasury officers of the Kakatiyas of Warangal. During their rule, the Pemmasani Nayaks controlled parts of Andhra Pradesh and had large mercenary armies that were the vanguard of the empire in the 16th century. The empire's patronage enabled fine arts and literature to reach new heights in Kannada, Telugu, Tamil, and Sanskrit, while Carnatic music evolved into its current form. The Lepakshi group of monuments built during this period have mural paintings of the Vijayanagara kings, Dravidian art, and inscriptions. These have been placed on the tentative list of the UNESCO World Heritage Committee. Modern history. Following the defeat of the Vijayanagara empire, the Qutb Shahi dynasty held sway over the Andhra country. This region passed into the rule of the Nizams under the Mughal empire. 
Soon the Nizam established himself as a sovereign ruler. In 1611, an English trading post of the East India Company was established at Masulipatnam on India's east coast. In the early nineteenth century, the Northern Circars were ceded to the British East India Company and became part of its Madras Presidency. Eventually, this region emerged as the Coastal Andhra region, the northern parts of which were later known as Uttarandhra. Later the Nizam ceded five territories to the British that eventually became the Rayalaseema region. The local chieftains known as Poligars revolted in 1800 against company rule, which was suppressed by the company. Raja Viziaram Raz (Vijayaram Raj) established a sovereign kingdom by claiming independence from the Kingdom of Jeypore in 1711. It formed alliances with the French and the British East India Company to conquer the neighbouring principalities of Bobbili, Kurupam, Paralakhemundi, and the kingdom of Jeypore. It fell out with the British and as a result was attacked and defeated in the battle of Padmanabham. It was annexed as a tributary estate like other principalities and remained so until its accession to the Indian Union in 1949. Following the Indian rebellion of 1857, the British Crown ruled this region until India became independent in 1947. The No-Tax campaign in Chirala and Perala in 1919 led by Duggirala Gopalakrishnayya, the Rampa Revolt led by Alluri Sitarama Raju in 1921, and the Salt Satyagraha in Dendulur in 1930 are some of the protests against British rule. Tanguturi Prakasam was arrested and jailed for more than three years for participating in the Quit India movement of 1942. He served as prime minister of the Madras presidency in 1946–47. The Dowleswaram Barrage, built in 1850 by Arthur Cotton, brought unused lands in the Godavari river basin into cultivation and transformed the economy of the region. Charles Philip Brown did pioneering work in bringing Telugu into the print era and introduced Vemana's poems to English readers. Kandukuri Veeresalingam is considered the father of the Telugu renaissance movement, as he encouraged the education of women and the remarriage of widows and fought against child marriage and the dowry system. Gurajada Apparao, a pioneering playwright who used the spoken dialect, wrote the play Kanyasulkam in 1892. It is considered the greatest play in the Telugu language. Post-independence. In an effort to gain an independent state based on linguistic identity and to protect the interests of the Telugu-speaking people of Madras state, Potti Sreeramulu fasted to death in 1952. As Madras city became a bone of contention, in 1949 a committee with Jawaharlal Nehru, Vallabhbhai Patel, and Pattabhi Sitaramayya was constituted. The committee recommended that Andhra province could be formed provided the Andhras gave up their claim on the city of Madras (now Chennai). After Potti Sreeramulu's death, the Telugu-speaking area of Andhra state was carved out of Madras state on 1 October 1953, with Kurnool as its capital city. Tanguturi Prakasam became the first chief minister. On the basis of the Gentlemen's agreement of 1956, the States Reorganisation Act created Andhra Pradesh by merging the neighbouring Telugu-speaking areas of the Hyderabad State, with Hyderabad as the capital, on 1 November 1956. The Indian National Congress (INC) ruled the state from 1956 to 1982. Neelam Sanjiva Reddy became the first chief minister. Among other chief ministers, P. V. 
Narasimha Rao is known for implementing land reforms and land ceiling acts and securing reservation for lower castes in politics. The Nagarjuna Sagar Dam, completed in 1967, and the Srisailam Dam, completed in 1981, are some of the irrigation projects which helped increase the production of paddy in the state. In 1983, the Telugu Desam Party (TDP) won the state elections and N. T. Rama Rao became the chief minister of the state for the first time, after launching his party just nine months earlier. This broke the long-time single-party monopoly enjoyed by the INC. He transformed the sub-district administration by forming mandals in place of the earlier taluks, removing hereditary village heads and appointing non-hereditary village revenue assistants. The 1989 elections ended the rule of Rao, with the INC returning to power with Marri Chenna Reddy at the helm. In 1994, Andhra Pradesh gave a mandate to the Telugu Desam Party again and Rao once more became the chief minister. Nara Chandrababu Naidu, Rao's son-in-law, came to power in 1995 with the backing of a majority of the MLAs. The Telugu Desam Party won both the assembly and Lok Sabha elections in 1999 under the leadership of Chandrababu Naidu. Naidu thus holds the record for the longest-serving chief minister (1995 to 2004) of united Andhra Pradesh. He introduced e-governance by launching "e-Seva" centers in 2001 for paperless and speedy delivery of government services. He is credited with transforming Hyderabad into an IT hub by providing incentives for tech companies to set up centers. In 2004, the Congress returned to power with a new chief ministerial face, Y. S. Rajasekhara Reddy, better known as YSR. The main emphasis during Reddy's tenure was on social welfare schemes such as free electricity for farmers, health insurance, tuition fee reimbursement for the poor, and the national rural employment guarantee scheme. He took over the free emergency ambulance service initiated by a private company and ran it as a government project. The INC won the April 2009 elections under the leadership of YSR. He was elected chief minister again, but was killed in a helicopter crash in September 2009. He was succeeded by Congressmen Konijeti Rosaiah and Nallari Kiran Kumar Reddy; the latter resigned over the impending division of the state to form Telangana. During its 58 years as a unified state, the state weathered separatist movements from Telangana (1969) and Andhra (1972) successfully. A new party called Telangana Rashtra Samithi, formed in April 2001 by Kalvakuntla Chandrashekar Rao (KCR), reignited the Telangana movement. A joint action committee formed with political parties, government employees, and the general public spearheaded the agitation. When KCR's health deteriorated during his fast-unto-death programme, the central government decided in December 2009 to initiate the process of forming an independent Telangana. This triggered the Samaikyandhra movement to keep the state united. The Srikrishna committee was formed to give recommendations to deal with the situation, and it gave its report in December 2010. The agitations continued for nearly five years, with the Telangana side harping on the marginalisation of its food culture and language and unequal economic development, and the Samaikyandhra movement focusing on the shared culture, language, customs and historical unity of the Telugu-speaking regions. The Andhra Pradesh Reorganisation Bill was passed by the Parliament of India for the formation of the Telangana state comprising ten districts, despite opposition by the state legislature. 
The bill included a provision to retain Hyderabad as the capital for up to ten years and a provision to ensure access to educational institutions for the same period. The bill received the assent of the president and was published in the gazette on 1 March 2014. The new state of Telangana came into existence on 2 June 2014 after approval from the president of India, with the residual state continuing as Andhra Pradesh. The present form of Andhra Pradesh is the same as Andhra state, except for Bhadrachalam town, which continues in Telangana. A number of petitions questioning the validity of the Andhra Pradesh Reorganisation Act, 2014 have been pending before the Supreme Court constitutional bench since April 2014. In the final elections held in the unified state in 2014, the TDP got a mandate in its favour, defeating its nearest rival, the YSR Congress Party, a breakaway faction of the Congress founded by Y. S. Jagan Mohan Reddy, son of former Chief Minister Y. S. Rajasekhara Reddy. N. Chandrababu Naidu, the chief of the TDP, became the chief minister on 8 June 2014. In 2017, the Government of Andhra Pradesh began operating from its new greenfield capital Amaravati, for which 33,000 acres were acquired from farmers through an innovative land pooling scheme. Interstate issues with Telangana relating to the division of assets of public sector institutions and organisations of the united state and the division of river waters are not yet resolved. Geography. The state is bordered by Telangana to the north and west, Chhattisgarh and Odisha to the north, the Bay of Bengal to the east, Tamil Nadu to the south and Karnataka to the west. Yanam district, an enclave of Puducherry, lies within the state bordering the Kakinada district. The state's coastline is the second longest in the nation. The Eastern Ghats are a major dividing line separating coastal plains and peneplains in the state's geography. The Eastern Coastal Plains comprise the coastal districts, of variable width, running along the Bay of Bengal up to the Eastern Ghats. These are, for the most part, delta regions formed by the Krishna, Godavari, and Penna rivers. Most of the coastal plains are put to intense agricultural use. The Eastern Ghats are discontinuous and individual sections have local names. The ghats become more pronounced towards the south and the extreme north of the coast. These consist of the Papikonda range, the Simhachal hill range, Yarada hills, Nallamala Hills, Papi hills, Seshachala hills and Horsley hills. The Kadapa Basin, formed by two arching branches of the Eastern Ghats, is a mineral-rich area. The peneplains, part of Rayalaseema, slope towards the east with the Eastern Ghats as their eastern border. Flora and fauna. The total forest cover of the state amounts to 18.28% of its total area. The Eastern Ghats region is home to dense tropical forests, while the vegetation becomes sparse as the ghats give way to the peneplains, where shrub vegetation is more common. The vegetation found in the state is largely of dry deciduous types, with a mixture of teak, "Terminalia", "Dalbergia", "Pterocarpus", "Anogeissus", etc. The state possesses some rare and endemic plants like "Cycas beddomei", "Pterocarpus santalinus", "Terminalia pallida", "Syzygium alternifolium", "Shorea talura", "Shorea tumburgia", "Psilotum nudum", etc. Coringa is an example of mangrove forests, salt-tolerant forest ecosystems near the sea. These forests account for about 9% of the state's forest area. 
The diversity of fauna includes tigers, panthers, hyenas, black bucks, cheetals, sambars, sea turtles and a number of birds and reptiles. The estuaries of the Godavari and Krishna rivers support rich mangrove forests, with fishing cats and otters as keystone species. The state has many sanctuaries and national parks, such as Coringa, Nagarjunsagar-Srisailam Tiger Reserve, Kolleru Bird Sanctuary, and Nelapattu Bird Sanctuary. Mineral resources. The state, with its varied geological formations, contains a variety of industrial minerals and building stones. It ranks at the top in deposits of mica in India. Minerals found in the state include limestone, manganese, asbestos, iron ore, ball clay, fire clay, gold, diamonds, graphite, dolomite, quartz, tungsten, steatite, feldspar, and silica sand. It has reserves of oil and natural gas. It has about one-third of India's limestone reserves and is known for large exclusive deposits of baryte and galaxy granite. The largest reserves of uranium are in Tummalapalli village, Vemula mandal of YSR district. Climate. The climate varies considerably, depending on the geographical region. Summers last from March to June. In the coastal plain, summer temperatures are generally higher than in the rest of the state. July to September is the season for tropical rains from the southwest monsoon. During October to December, low-pressure systems and tropical cyclones form in the Bay of Bengal, which, along with the northeast monsoon, bring rains to the southern and coastal regions of the state. November to February are the winter months. Since the state has a long coastal belt, the winters are not very cold. Lambasingi in Visakhapatnam district is nicknamed the "Kashmir of Andhra Pradesh" owing to its unusually low temperatures. Demographics. Based on the 2011 Census of India, the population of Andhra Pradesh is 49,577,103. The total population consists of 70.53% rural population and 29.47% urban population. The state has a 17.08% Scheduled Caste and a 5.53% Scheduled Tribe population. Children in the age group of 0–6 years number 5,222,384, constituting 10.6% of the total population. Among them 2,686,453 are boys and 2,535,931 are girls. Adults in the age group of 18–23 account for 5,815,865 (2,921,284 males, 2,894,581 females). The state has a sex ratio of 997 females per 1000 males, higher than the national average of 926 per 1000. The literacy rate of the state stands at 67.35%. The erstwhile West Godavari district has the highest literacy rate of 74.32% and the erstwhile Vizianagaram district has the lowest at 58.89%. The state ranks 27th of all Indian states in the Human Development Index (HDI) scores for the year 2018. There are 39,984,868 voters (19,759,489 male, 20,221,455 female and 3,924 third gender). Kurnool district has the maximum number of voters at 1,942,233, while ASR district has the minimum at 729,085. Languages. Telugu and Urdu are the official languages of the state. Telugu is the mother tongue of nearly 90% of the population. Rajahmundry is the cultural capital of Andhra Pradesh, as the Telugu language has its roots in this region. Tamil, Kannada, and Odia are spoken in the border areas. Lambadi, Koya, Savara, Konda, Gadaba, and a number of other languages are spoken by the Scheduled Tribes of the state. 
As per the IRS Q4 2019 survey, 19% of the population aged 12 years and above can read and understand English. Religion. According to the 2011 census, the major religious groups in the state are Hindus (90.89%), Muslims (7.30%) and Christians (1.38%). Family health. National Family Health Survey (NFHS-5) 2019–21 data for Andhra Pradesh reveals several important indicators of family health. 85% of households in the state have pucca houses. 76% of households (59% of urban, 83% of rural) own a house. Almost all houses have an electricity connection. 84% of households use a clean fuel for cooking. 22% have piped water. 85% of all households (97% in urban areas, 80% in rural areas) have access to a toilet facility. Almost all urban households (96%) and most rural households (89%) use a mobile phone. 96% of households use bank or post office savings accounts. 97% of childbirths during 2014–2019 happened in a health facility. The state health insurance scheme (Dr YSR Arogya Sri), the employee health scheme, the RSBY, the employees' state insurance scheme (ESIS), and the central government health scheme together cover 70% of households with at least one member covered. Administrative divisions. Andhra Pradesh comprises two regions, namely Kostaandhra (Coastal Andhra) and Rayalaseema. The northern part of Coastal Andhra is sometimes mentioned separately as Uttarandhra, particularly after the bifurcation, to raise a voice against its underdevelopment. Districts. The state is divided into 26 districts, with Uttarandhra comprising 6 districts, Kostaandhra comprising 12 districts and Rayalaseema comprising 8 districts. Revenue divisions. These 26 districts are further divided into 76 revenue divisions. Mandals and village panchayats. The 76 revenue divisions are in turn divided into 679 mandals. There are 13,324 village panchayats in the state. Cities and towns. There are 123 urban local bodies, comprising 17 municipal corporations, 79 municipalities and 27 nagar panchayats in the state. The urban population is about 14.6 million (29.47%) as per the 2011 census. There are two cities with more than one million inhabitants, namely Visakhapatnam and Vijayawada; Guntur's population is likely to cross the one-million mark soon. Economy. Agriculture contributes 36.19% of GSDP, while industry contributes 23.36% and services 40.45%. The state posted a record growth of 7.02% at constant prices (2011–12) against the country's growth of 7%. AP achieved overall 4th rank in the Sustainable Development Goals (SDG) India Report for the year 2020–21, with first rank in SDG-7 (affordable energy) and second rank in SDG-14 (life below water). In 2014–15, the first year after bifurcation, the state ranked eighth in GSDP at current prices, recording 12.03% growth over the previous fiscal year. Agriculture. The agricultural economy comprises agriculture, livestock, poultry farming, and fisheries. Four important rivers of India, the Godavari, Krishna, Penna, and Tungabhadra, flow through the state and provide irrigation. 60% of the population is engaged in agriculture and related activities. Rice is the major food crop and staple food of the state. 
The state has three agricultural export zones: the undivided Chittoor district for mango pulp and vegetables, the undivided Krishna district for mangoes, and the undivided Guntur district for chilies. Besides rice, farmers grow jowar, bajra, maize, minor millets, many varieties of pulses, oil seeds, sugarcane, cotton, chili pepper, mango, and tobacco. Crops used for vegetable oil production, such as sunflower and peanuts, are popular. The state contributes 10% of the total fish and over 70% of the shrimp production of India. The geographical location of the state allows marine fishing as well as inland fish production. The main marine export is Vannamei shrimp. Industrial sector. As per the annual survey of industries 2019–20, the number of factories was 12,582 with 681,224 employees. The top four employment providers are food products (25.48%), non-metallic minerals (11.26%), textiles (9.35%) and pharmaceuticals (8.68%). Food products (18.95%), pharmaceuticals (17.01%) and non-metallic minerals (16.25%) are the top three contributors to the gross value added (GVA) of the industrial sector. From a district perspective, the top three districts were undivided Visakhapatnam, Chittoor, and Krishna. The defence-administered Hindustan Shipyard Limited built the first ship of India in 1948. Sri City, located in Tirupati district, is an integrated business city which is home to several multinational companies. The state has 36 big auto players such as Ashok Leyland, Hero Motors, Isuzu Motors India, and Kia Motors, with investment of over US$2.8 billion. It accounts for 10% of the auto exports of India. Industrial minerals, dimensional stones, building materials, and sand are the main minerals. The mining sector contributed revenue to the state during 2021–22. The Ravva block, in the shallow offshore area of the Krishna Godavari Basin, produced nearly 311 million barrels of crude oil and 385 billion cubic feet of natural gas from the start of production in March 1994. The state accounts for 2.7% of crude oil production in India, with 827.8 thousand metric tonnes (TMT) from its Krishna Godavari basin. 809 million metric standard cubic metres (MMSCM) of natural gas is produced from onshore sites, which accounts for 2.4% of India's production. Services. IT/ITES. Information technology exports from the state in 2021–22 amounted to 0.14% of India's IT exports; the state's share has remained below 2% over the past five years. Travel and tourism. The state ranked third in domestic tourist footfalls for the year 2021, with 93.2 million domestic tourists, which amounts to 13.8% of all-India domestic tourists. A major share of the tourists visit temples in Tirupati, Vijayawada, and Srisailam. Government and politics. The legislative assembly is the lower house of the state legislature with 175 members, and the legislative council is the upper house with 58 members. In the Parliament of India, the state has 11 seats in the Rajya Sabha and 25 seats in the Lok Sabha. There are a total of 175 assembly constituencies in the state. In the 2019 elections, Y. S. Jagan Mohan Reddy, leader of the YSR Congress Party, became the chief minister with a resounding mandate, winning 151 out of 175 seats. Government revenue and expenditure. For 2021–22, total receipts of the Andhra Pradesh government included loans and the state's own tax revenue. The top three sources of the state's own tax revenue are state goods and services tax (GST), sales tax/value added tax (VAT), and state excise. 
The government earned revenue from 2.574 million transactions for registration services. Visakhapatnam, Vijayawada, Guntur, and Tirupati cities are the top contributors to this revenue. The government's total expenditure was ₹1,91,594 crore, which includes debt repayment of ₹13,920 crore. The fiscal deficit was ₹25,013 crore, which was 2.1% of GSDP. Revenue expenditure was ₹1,59,163 crore and capital expenditure was ₹16,373 crore. Welfare expenditure got the maximum share. Education accounted for ₹25,796 crore, energy ₹10,852 crore, and irrigation ₹7,027 crore. Outstanding debt was ₹3.89 lakh crore, an increase of almost ₹40,000 crore over the previous year; this accounts for 32.4 per cent of GSDP. The outstanding guarantee estimate was ₹1,38,875 crore, equal to 12% of GSDP, of which ₹38,473 crore is for the power sector. Amaravati protests. In August 2020, the Andhra Pradesh legislative assembly passed the Andhra Pradesh Decentralisation and Inclusive Development of All Regions Act. It provided for limiting Amaravati to being the legislative capital, while naming Vizag as the executive capital and Kurnool as the judicial capital. The events leading to this decision resulted in widespread, continuing protests by the farmers of Amaravati. The act was challenged in the Andhra Pradesh High Court, which ordered that the status quo be maintained until it completed its hearing. The government, led by Y. S. Jagan Mohan Reddy, withdrew the act when the High Court hearing reached its final stage. The chief minister said that his government would bring a better and more complete bill. The protesters, under the banner of the Amaravati Parirakshana Samithi (APS) and the joint action committee (JAC) of Amaravati, received support from all the political parties barring the ruling YCP when they held long marches across the state seeking support for their agitation. On 5 March 2022, the High Court ruled that the government could not abandon the development of Amaravati as the capital city, since farmers had parted with 33,000 acres of land under an agreement with the Andhra Pradesh Capital Region Development Authority (APCRDA) to develop it as the capital city and ₹15,000 crore had been sunk into its development. It asked the government to develop Amaravati within six months. When the government appealed to the Supreme Court, it obtained a stay on the part of the judgement requiring the city to be developed within six months. The Supreme Court posted the case for hearing on 11 July 2023. Meanwhile, Jagan Mohan Reddy announced that Visakhapatnam would become the new capital when he addressed a meeting on 31 January 2023 relating to an upcoming investment summit. Interstate disputes. Assets division with Telangana. There are 91 institutions under Schedule IX with assets of ₹1.42 lakh crore, 142 institutions under Schedule X with assets of ₹24,018.53 crore, and another 12 institutions not mentioned in the act with assets of ₹1,759 crore, which are to be split between Andhra Pradesh and Telangana following the bifurcation. An expert committee headed by Sheela Bhide gave recommendations for the bifurcation of 89 of the 91 Schedule IX institutions. Telangana selectively accepted the recommendations, while Andhra Pradesh is asking for their acceptance in full. The division of the RTC headquarters and of Deccan Infrastructure and Landholdings Limited (DIL), with its huge land parcels, has become contentious. Despite several meetings of the trilateral dispute resolution committees, no progress was made. The Andhra Pradesh government filed a suit in the Supreme Court. 
Krishna river water sharing dispute. Andhra Pradesh and Telangana continue to dispute their shares of the Krishna river water. In 1969, the Bachawat tribunal, set up to allocate water shares among the riparian states, allocated 811 tmcft of water to Andhra Pradesh. The Andhra Pradesh government of that time split it in a 512:299 tmcft ratio between Andhra (including the basin area of Rayalaseema) and Telangana respectively, based on the utilisation facilities established at that time. Though the tribunal recommended utilising Tungabhadra Dam (part of the Krishna basin) water for the drought-prone Mahabubnagar area of Telangana, this was not implemented. The bifurcation act advised the formation of the Krishna River Management Board (KRMB) and the Godavari River Management Board (GRMB) for resolving disputes between the new states. In 2015, the two states agreed, in a meeting with the central water ministry, to share water in a 66:34 (AP:Telangana) ratio as an interim arrangement, to be reviewed every year. This practice continued without further review. Telangana filed a suit in the Supreme Court for a 70% share. Following the assurance that a tribunal would be formed to resolve the issue, Telangana withdrew its suit. The Centre is yet to form the tribunal. Godavari water sharing dispute. The undivided Andhra Pradesh got 1172.78 tmcft of Godavari water. Telangana is utilising 433 tmcft for its completed projects, while Andhra Pradesh's share is 739 tmcft. The Andhra Pradesh government has opposed Telangana's submission of detailed project reports for additional utilisation through new or upgraded projects such as the Kaleswaram, Tupakulagudem, Sitarama, Mukteswaram, and Modikunta lift irrigation projects. Five villages near Bhadrachalam. The 1.50-metre increase in the height of the Polavaram coffer dam to 44 metres raised the suspicion that it led to the flooding of Bhadrachalam and nearby villages of Telangana along the Godavari river in 2022. Three mandals which were originally part of Andhra state were transferred back to Andhra Pradesh, excluding Bhadrachalam town, to support the Polavaram project, as those areas are likely to be submerged. Telangana would like to take back five villages on the river banks for ease of movement of its government machinery to provide rehabilitation support to its other villages beyond them, to which the Andhra Pradesh government objects. Infrastructure. Transport. Roads. The state's major road network comprises national highways, state highways and major district roads. NH 16, which runs through the state, is a part of the Golden Quadrilateral project undertaken under the National Highways Development Project. The proposed Anantapuram–Amaravati Expressway has been changed to the Anantapur–Guntur national highway 544D, with implementation expected to begin in January 2023. 1.828 million transport vehicles and 13.7 million non-transport vehicles are registered in the state. In the transport category, 0.98 million are goods carriages constituting 53.61%, 0.66 million are auto rickshaws constituting 36.21% and 0.109 million are cabs constituting 5.96%. In the non-transport category, 12.2 million are motorcycles constituting 89.5% and 1.067 million are four-wheelers constituting 7.29%. The Integrated Road Accident Database project, an initiative of the Ministry of Road Transport and Highways (MORTH), is under implementation in the state. 
Construction of institutes of driver training and research at Darsi, Prakasam district and Dhone, Nandyal district, in partnership with Maruti Suzuki and Ashok Leyland respectively, is in progress. Automation of driving test tracks in 9 district headquarters is expected to be completed by 31 March 2023. The state-government-owned Andhra Pradesh State Road Transport Corporation (APSRTC) is the public bus transport provider. It is organised into 129 depots across 4 zones. It has a fleet strength of 11,098 buses with a staff count of 49,544. It operates 1.11 billion km and serves 3.68 million passengers daily. Pandit Nehru bus station (PNBS) in Vijayawada is the second-largest bus terminal in Asia. Railways. Andhra Pradesh has a broad-gauge railway network with a rail density of 24.36 km per 1,000 square kilometres. The railway network in Andhra Pradesh falls under the South Central Railway, East Coast Railway and South Western Railway zones. During 2014–2022, 350 km of new lines were constructed, at the rate of 44 km per year, in Andhra Pradesh under the South Central Railway division. The rate of construction was only 2 km per year in the preceding five years. The 308.70 km Nadikudi–Srikalahasti line, sanctioned in 2011–12 as a joint project of the centre and the state, is progressing slowly, with only phase 1 of 46 km between New Piduguralla station and Savalyapuram completed in 2021–22. There are three A1 and 23 A-category railway stations in the state as per the assessment in 2017. One of the state's stations was declared the cleanest railway station in the country as per the assessment in 2018. The railway station of Shimiliguda was the highest broad-gauge railway station in the country when it was built. A new railway zone, the South Coast Railway zone (SCoR), with headquarters at Visakhapatnam, was announced as the newest railway zone of Indian Railways in 2019, but is yet to be implemented. Airports. Visakhapatnam airport, NTR Amaravati international airport at Vijayawada, and Tirupati airport are the international airports in the state. The state has three domestic airports, namely Rajahmundry airport, Kadapa airport, and Kurnool airport. A privately owned airport for emergency and chartered flights is at Puttaparthi. Sea ports. The state has one major port at Visakhapatnam under the administrative control of the central government and 15 notified ports, including three captive ports, under the control of the state government. The other notable ports are Krishnapatnam port, Gangavaram port and Kakinada port. Gangavaram port is a deep seaport which can accommodate ocean liners of up to 200,000–250,000 DWT. Communication. The AP state wide area network (APSWAN) connects 2,164 offices of the state administration at 668 locations down to the level of mandal headquarters. The network supports both data and video communications. Bharat Sanchar Nigam Limited (BSNL) and the National Knowledge Network (NKN) link district headquarters with the state headquarters with a bandwidth of 34 Mbit/s. Mandal headquarters are connected with a bandwidth of 8 Mbit/s. Andhra Pradesh State FiberNet Limited (APSFNL) operates an optical fiber network. This provides internet connectivity, telephony and Internet protocol television (IPTV) over fiber to private and corporate users of Andhra Pradesh. Water. The state has 40 major and medium rivers and 40,000 minor irrigation sources. The Godavari, Krishna and Pennar are the major rivers. The total cultivable area is 19.904 million acres. Major, medium, and minor irrigation projects irrigate 10.311 million acres. 
The Polavaram project, under construction, suffered a setback with damage to its diaphragm wall during the 2022 floods. The Veligonda project is likely to be commissioned by September 2023. The Annamayya project, washed away in the 2021 floods, is set to be redesigned at a cost of 787 crore. Power. Thermal, hydel and renewable power plants supply power to the state. The state's installed capacity share in public sector generating stations was 7,245 MW. Private sector installed capacity was 9,370 MW, which includes independent power producer capacity of 1,961 MW, taking the total installed capacity to 16,615 MW. Peak power demand of the state in 2021–22 was 12,032 MW, per capita consumption was 1,285 kilowatt hours, and energy consumed was 68,972 million units. Healthcare. The government spends 7.3% of the state budget on healthcare, compared to the national average of 4 to 4.5 per cent. The 108 service provides fast emergency response, reaching patients and shifting them to a nearby healthcare facility. The 104 service provides healthcare at the doorstep of villages through mobile medical units that visit at least once a month. All poor families are covered by the free state health insurance scheme called Arogyasri up to a limit of . The services are provided in government and private hospitals under the network. During 2014–2018, though the nominal mean claim amount of Arogyasri beneficiaries rose significantly, it decreased after accounting for inflation. Mortality rates have decreased significantly, which indicates better outcomes are being achieved at a lower cost. Education. Primary and secondary school education is imparted by government, aided and private schools, managed and regulated by the School Education Department of the state. There are urban, rural, and residential schools. As per the child info and school information report (2018–19), there were a total of students enrolled in schools. students appeared for the April 2023 Secondary School Certificate (SSC) exam in the regular stream. The overall pass percentage was 72.26%, with 933 schools recording a 100% pass rate. In March–April 2023, 379,758 candidates appeared for the Intermediate second year examinations, of whom 272,001, amounting to 71%, were declared passed. The state initiated education reforms in 2020 by creating six types of schools, namely satellite foundation schools (pre-primary), foundation schools (pre-primary to class II), foundation school plus (pre-primary to class V), pre-high schools (class III to class VII/VIII), high schools (class III to class X) and high school plus (class III to class XII). The transition to English-medium education in all government schools, started in the academic year 2020–21, is expected to be completed by 2024–25. As an initial step, 1,000 government schools were affiliated to CBSE in 2022–23, and a bilingual textbook scheme was adopted to ease the transition. The state government is going ahead with English medium based on a survey of parents, despite protests and court cases. The initiative is funded in part by a World Bank loan of $250 million over 2021–2026 through the "Supporting Andhra's Learning Transformation" (SALT) project, to improve the learning outcomes of children up to class II level. There were 510 industrial training institutes (ITIs) in Andhra Pradesh in 2020–21, with 82 under government management and 417 under private management. The total available seats in 2021 were 93,280, out of which 48.90% were filled.
10,053 students completed ITI education in the year 2020. There are 169 government degree colleges and 55 private aided degree colleges in the state; 66 government colleges and 48 private aided colleges have valid NAAC grades. There are 85 government and aided polytechnic colleges and 175 private polytechnic colleges, with a sanctioned strength of 75,906 students. The AP State Council of Higher Education organises various entrance tests for different streams and conducts counselling for admissions. The AP State Skill Development Corporation has been set up to support skill development and placement. There are a total of 36 universities, comprising 3 central universities, 23 state public universities, 6 state private universities, and 4 deemed universities. Andhra University, established in 1926, is the oldest university in the state. The government established the Rajiv Gandhi University of Knowledge Technologies (RGUKT) in 2008 to cater to the education needs of the rural youth of Andhra Pradesh. Dr. Y.S.R. University of Health Sciences oversees medical education in 348 affiliated colleges, spanning the entire range from traditional to modern medicine. The public universities, including legacy universities such as Andhra, Sri Venkateswara, and Nagarjuna, are suffering from a severe funds crunch and staff shortages, managing with only 20% of the sanctioned full-time staff. The Gross Enrollment Ratio (GER) in higher education for the age group 18–23 in the state was 35.2% for the year 2019–20, which compares favourably with the all-India GER of 27.1%. With a female GER of 35.3 and a male GER of 38.2, the gender parity index is 0.84; the corresponding index for India is 1.01. Koneru Lakshmaiah Education Foundation University (KL College of Engineering) secured the 50th rank, while Andhra University in Visakhapatnam secured the 76th rank, in the overall category of the India Rankings 2023 under the National Institutional Ranking Framework (NIRF) of the Union Ministry of Education. 2,478 institutions, including 242 from the state, participated in the ranking. Andhra Pradesh has 2,510 public libraries under government management, including 4 regional libraries and 13 district central libraries. Saraswata Niketanam at Vetapalem in Bapatla district, one of the oldest libraries, established under private management in 1918, is losing patronage as internet use spreads. The government is planning to develop digital libraries at the village panchayat level. Science and technology. There are 190 science and technology organisations in Andhra Pradesh, including 12 central labs and research institutions. Satish Dhawan Space Centre (SDSC), also known as Sriharikota Range (SHAR), on the barrier island of Sriharikota in Tirupati district, is a satellite launching station operated by the Indian Space Research Organisation and is India's primary orbital launch site. India's lunar orbiter Chandrayaan-1 was launched from the centre on 22 October 2008. Some notable scientists. Yellapragada Subba Rao, a pioneering biochemist hailing from the state, discovered the function of adenosine triphosphate (ATP) as an energy source in the cell and developed drugs for cancer and filariasis. Yelavarthy Nayudamma, a chemical engineer, worked extensively at the Central Leather Research Institute, Chennai, and rose to become the director general of the Council of Scientific and Industrial Research (CSIR), India. C. R. Rao is an Indian-American mathematician and statistician and an alumnus of Andhra University; his work on statistics has influenced various sciences. Media.
The total number of registered newspapers and periodicals in the state for the year 2020–21 was 5,798, including 1,645 dailies, 817 weeklies, 2,431 monthlies, and 623 fortnightlies. 787 Telugu dailies had a circulation of 9,911,005, and 103 English dailies had a circulation of 1,646,453. Eenadu, Sakshi and Andhra Jyothi are the top three Telugu daily newspapers published from Andhra Pradesh in terms of circulation, as well as the top three Telugu news sites. BBC Telugu news was launched on 2 October 2017. Several privately owned news media outlets are considered biased towards specific political parties in the state. There were 10 general entertainment channels, 23 news channels, 2 health channels, 6 religious channels, 2 other channels, and 2 cable distribution channels, amounting to a total of 45 channels empanelled by the Andhra Pradesh Information and Public Relations department. All India Radio has channels operating from several locations in the state, and Red FM operates from 4 locations. Culture. Andhra Pradesh has 17 geographical indications in the categories of agriculture, handicrafts, foodstuff and textiles as per the "Geographical Indications of Goods (Registration and Protection) Act, 1999". Some of the GI products are Banaganapalle mangoes, Bandar laddu, Kondapalli toys, Tirupati laddu, and saris made in Dharmavaram and Machilipatnam. Handicrafts. Machilipatnam and Srikalahasti Kalamkari are two unique textile art forms practised in India. Other notable handicrafts present in the state include the soft limestone idol carvings of Durgi. Etikoppaka in Visakhapatnam district is notable for its lac industry, producing lacquered wooden toys. Literature. Nannayya, Tikkana, and Yerrapragada form the trinity who translated the Sanskrit epic "Mahabharata" into the Telugu language. Nannayya wrote the first treatise on Telugu grammar, called "Andhra Shabda Chintamani", in Sanskrit. Pothana translated "Sri Bhagavatam" into Telugu as "Srimad Maha Bhagavatamu". Vemana was an Indian philosopher who wrote Telugu poems using simple language and native idioms on subjects such as yoga, wisdom, and morality. Potuluri Veerabrahmendhra Swami, a clairvoyant and social reformer, wrote "Kalagnanam", a book of predictions, in the 16th century. Telugu literature after Kandukuri Veeresalingam is termed Adhunika Telugu Sahityam (Modern Telugu literature). Veeresalingam is known as "Gadya Tikkana" and was the author of the Telugu social novel "Satyavati Charitam". Viswanatha Satyanarayana was conferred the Jnanpith Award. Sri Sri brought new forms of expressionism into Telugu literature. Festivals. Sankranti is the major festival celebrated across the state. Eid is celebrated with special prayers. Rottela Panduga is celebrated at Bara Shaheed Dargah in Nellore with participation across religious lines. Dance, music, and cinema. Kuchipudi, the cultural dance recognized as the official dance form of the state of Andhra Pradesh, originated in the village of Kuchipudi in Krishna district. Many composers of Carnatic music like Annamacharya, Kshetrayya, Tyagaraja, and Bhadrachala Ramadas were of Telugu descent. Modern Carnatic music composers and singers like Ghantasala and M. Balamuralikrishna are of Telugu descent. The Telugu film industry has hosted many music composers and playback singers such as S. P. Balasubrahmanyam, P. Susheela, S. Janaki, and P. B. Sreenivas. Folk songs are important and popular in many rural areas of the state. Forms such as the "Burra katha" and "Poli" are still performed today.
"Harikathaa Kalakshepam (or Harikatha)" involves the narration of a story, intermingled with various songs relating to the story. Harikatha was originated in Andhra. "Burra katha" is an oral storytelling technique with the topic be either a Hindu mythological story or a contemporary social issue. "Rangasthalam" is an Indian theatre in the Telugu language, based predominantly in Andhra Pradesh. Gurajada Apparao wrote the play "Kanyasulkam" in 1892, often considered the greatest play in the Telugu language. C. Pullaiah is cited as the father of Telugu theatre movement. Andhra Pradesh State Film, Television & Theater Development Corporation offers incentives to promote industry. The government is asking film industry to make Vizag as its hub. The Telugu film industry (known as "Tollywood") produces 300 films annually is primarily based in Hyderabad, though several films are shot in Vizag. Film producer D. Ramanaidu holds a Guinness record for the most films produced by a person. In the years 2005, 2006, and 2008, the Telugu film industry produced the largest number of films in India, exceeding the number of films produced in Bollywood. "Naatu Naatu" from the film "RRR" became the first song from an Indian film to win the Academy Award for Best Original Song and the Golden Globe Award for Best Original Song, as well as the first song from an Asian film to win the former. Cuisine. Andhra meal is combination of spicy, tangy, and sweet flavours. Chillies which are produced abundantly in Andhra Pradesh and curry leaves are used copiously in most preparations of curries, chutneys. Various types of Pappu are made using lentils in combination with one of tomato, spinach, gongura, ridge gourd etc. Apart from curries, Pulusu, a stew made using tamarind juice in combination with vegetables, sea food, chicken, mutton etc., is popular. Pachchadi, a paste usually made with combination of groundnuts, fried vegetable, chillies is a must in a meal. Pickles made using Mangos, gooseberries,lemon etc. are enjoyed in combination with Pappu. Buttermilk, and yogurt mixed with rice and eaten toward the end of meal soothes the body especially after eating spicy food items earlier. Ariselu, Burelu, Laddu, and Pootharekulu are some of the sweets made for special festivals and occasions. Tourism. Some of the popular religious pilgrim destinations include Tirumala Venkateswara temple at Tirupati, Srikalahasti temple, Varaha Lakshmi Narasimha temple, Simhachalam, Shahi Jamia Masjid in Adoni, Gunadala Church in Vijayawada, and Buddhist centres at Amaravati and Nagarjuna Konda. Tirumala Venkateswara temple is the world's most visited Hindu temple with footfalls of 30,000-40,000 daily and about 75,000 on new year's eve. The region is home to a variety of other pilgrimage centres, such as the Pancharama Kshetras, Mallikarjuna Jyotirlinga, Kanaka Durga temple and Kodanda Rama temple. The state has several beaches in its coastal districts such as Rushikonda, Mypadu, Suryalanka etc.; caves such as, Borra Caves, Indian rock-cut architecture depicting Undavalli Caves and the country's second longest caves- the Belum Caves. The valleys and hills include, Araku Valley, Horsley Hills, Papi Hills, and Gandikota gorge. Arma Konda peak located in Visakhapatnam district is the highest peak in Eastern Ghats. Museums. The state has 32 museums, which feature a varied collection of ancient sculptures, paintings, idols, weapons, cutlery, and inscriptions, and religious artifacts. 
The Amaravati archaeological museum at Amaravati houses several archaeological artefacts. The Visakha Museum and Telugu Samskruthika Niketanam in Visakhapatnam display historical artefacts of the pre-independence era. The Bapu museum in Vijayawada displays a large collection of artefacts. Advanced projection mapping with graphic, animation and laser displays, launched in 2019, is used to tell the history of Kondapalli fort, utilising the irregular landscape, ruins, and buildings present in the fort as a screen. The Archaeological Survey of India has identified 135 centrally protected monuments in the state of Andhra Pradesh, including the reconstructed monuments at Anupu and Nagarjunakonda. Sports. The Sports Authority of Andhra Pradesh is the governing body which looks after infrastructure development, coaching, and the administration of sports promotion schemes. The Dr. YSR sports school, with classes for grades 4–10 and a focus on tapping rural sporting talent, was established at Putlampalli, YSR district, in December 2006. The ACA-VDCA stadium in Visakhapatnam has hosted ODI, T20I, and IPL matches. Andhra Pradesh secured 16 medals at the 36th National Games held in 2022 and was ranked twenty-first in the competition. It won most of its medals in athletics; two silver and one bronze were won in weightlifting. Karnam Malleswari is the first Indian woman to win an Olympic medal. Pullela Gopichand is a former Indian badminton player who won the All England Open Badminton Championships in 2001, becoming the second Indian to win it after Prakash Padukone. Srikanth Kidambi, a badminton player, became the first Indian to reach the men's singles final at the World Championships in 2021, winning a silver medal.
2380
Accelerated Graphics Port
Accelerated Graphics Port (AGP) is a parallel expansion card standard, designed for attaching a video card to a computer system to assist in the acceleration of 3D computer graphics. It was originally designed as a successor to PCI-type connections for video cards. Since 2004, AGP was progressively phased out in favor of PCI Express (PCIe), which is serial, as opposed to parallel; by mid-2008, PCI Express cards dominated the market and only a few AGP models were available, with GPU manufacturers and add-in board partners eventually dropping support for the interface in favor of PCI Express. Advantages over PCI. AGP is a superset of the PCI standard, designed to overcome PCI's limitations in serving the requirements of the era's high-performance graphics cards. The primary advantage of AGP is that it doesn't share the PCI bus, providing a dedicated, point-to-point pathway between the expansion slot(s) and the motherboard chipset. The direct connection also allows for higher clock speeds. The second major change is the use of split transactions, wherein the address and data phases are separated. The card may send many address phases so the host can process them in order, avoiding any long delays caused by the bus being idle during read operations. Third, PCI bus handshaking is simplified. Unlike PCI bus transactions whose length is negotiated on a cycle-by-cycle basis using the FRAME# and STOP# signals, AGP transfers are always a multiple of 8 bytes long, with the total length included in the request. Further, rather than using the IRDY# and TRDY# signals for each word, data is transferred in blocks of four clock cycles (32 words at AGP 8× speed), and pauses are allowed only between blocks. Finally, AGP allows (mandatory only in AGP 3.0) "sideband addressing", meaning that the address and data buses are separated so the address phase does not use the main address/data (AD) lines at all. This is done by adding an extra 8-bit "SideBand Address" bus over which the graphics controller can issue new AGP requests while other AGP data is flowing over the main 32 address/data (AD) lines. This results in improved overall AGP data throughput. This great improvement in memory read performance makes it practical for an AGP card to read textures directly from system RAM, while a PCI graphics card must copy it from system RAM to the card's video memory. System memory is made available using the graphics address remapping table (GART), which apportions main memory as needed for texture storage. The maximum amount of system memory available to AGP is defined as the "AGP aperture". History. The AGP slot first appeared on x86-compatible system boards based on Socket 7 Intel P5 Pentium and Slot 1 P6 Pentium II processors. Intel introduced AGP support with the i440LX Slot 1 chipset on August 26, 1997, and a flood of products followed from all the major system board vendors. The first Socket 7 chipsets to support AGP were the VIA Apollo VP3, SiS 5591/5592, and the ALI Aladdin V. Intel never released an AGP-equipped Socket 7 chipset. FIC demonstrated the first Socket 7 AGP system board in November 1997 as the "FIC PA-2012" based on the VIA Apollo VP3 chipset, followed very quickly by the "EPoX P55-VP3" also based on the VIA VP3 chipset which was first to market. Early video chipsets featuring AGP support included the Rendition Vérité V2200, 3dfx Voodoo Banshee, Nvidia RIVA 128, 3Dlabs PERMEDIA 2, Intel i740, ATI Rage series, Matrox Millennium II, and S3 ViRGE GX/2. 
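The bandwidth advantage over PCI described above can be illustrated with a rough back-of-the-envelope calculation: AGP keeps PCI's 32-bit data path but raises the base clock to 66 MHz and transfers data one, two, four or eight times per clock depending on the speed grade. The Python sketch below is only an illustration of that arithmetic; it ignores protocol overhead and uses the nominal 66.66 MHz clock figure.

# Rough theoretical peak bandwidth of PCI and the AGP speed grades.
# Illustrative only: ignores protocol overhead and arbitration.
BUS_WIDTH_BYTES = 4  # 32-bit data path, shared by PCI and AGP

def peak_bandwidth_mb_s(clock_mhz: float, transfers_per_clock: int) -> float:
    """Peak transfer rate in MB/s for a 32-bit bus."""
    return clock_mhz * transfers_per_clock * BUS_WIDTH_BYTES

print(f"PCI (33 MHz, 1x): {peak_bandwidth_mb_s(33.33, 1):7.0f} MB/s")   # ~133 MB/s
for multiplier in (1, 2, 4, 8):
    rate = peak_bandwidth_mb_s(66.66, multiplier)
    print(f"AGP (66 MHz, {multiplier}x): {rate:7.0f} MB/s")             # ~266 to ~2133 MB/s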
Some early AGP boards used graphics processors built around PCI and were simply bridged to AGP. This resulted in the cards benefiting little from the new bus, with the only improvements used being the 66 MHz bus clock, with its resulting doubled bandwidth over PCI, and bus exclusivity. Intel's i740 was explicitly designed to exploit the new AGP feature set; in fact it was designed to texture only from AGP memory, making PCI versions of the board difficult to implement (local board RAM had to emulate AGP memory). Microsoft first introduced AGP support into "Windows 95 OEM Service Release 2" (OSR2 version 1111 or 950B) via the "USB SUPPLEMENT to OSR2" patch. After applying the patch the Windows 95 system became "Windows 95 version 4.00.950 B". The first Windows NT-based operating system to receive AGP support was Windows NT 4.0 with Service Pack 3, introduced in 1997. Linux support for AGP enhanced fast data transfers was first added in 1999 with the implementation of the AGPgart kernel module. Later use. With the increasing adoption of PCIe, graphics card manufacturers continued to produce AGP cards as the standard became obsolete. As GPUs began to be designed to connect to PCIe, an additional PCIe-to-AGP bridge chip was required to create an AGP-compatible graphics card. The inclusion of a bridge, and the need for a separate AGP card design, incurred additional board costs. The GeForce 6600 and ATI Radeon X800 XL, released during 2004–2005, were the first bridged cards. In 2009, AGP cards from Nvidia had a ceiling of the GeForce 7 series. In 2011, DirectX 10-capable AGP cards from AMD vendors (Club 3D, HIS, Sapphire, Jaton, Visiontek, Diamond, etc.) included the Radeon HD 2400, 3450, 3650, 3850, 4350, 4650, and 4670. The HD 5000 AGP series mentioned in the AMD Catalyst software was never available. There were many problems with the AMD Catalyst 11.2–11.6 AGP hotfix drivers under Windows 7 with the HD 4000 series AGP video cards; use of the 10.12 or 11.1 AGP hotfix drivers is the recommended workaround. Several of the vendors listed above make available past versions of the AGP drivers. By 2010, no new motherboard chipsets supported AGP and few new motherboards had AGP slots; however, some continued to be produced with older AGP-supporting chipsets. In 2016, Windows 10 version 1607 dropped support for AGP. Possible future removal of support for AGP from open source Linux kernel drivers was considered in 2020. Versions. Intel released "AGP specification 1.0" in 1997. It specified 3.3 V signals and 1× and 2× speeds. Specification 2.0 documented 1.5 V signaling, which could be used at 1×, 2× and the additional 4× speed. Specification 3.0 added 0.8 V signaling, which could be operated at 4× and 8× speeds. (1× and 2× speeds are physically possible, but were not specified.) Available versions are listed in the adjacent table. AGP version 3.5 is only publicly mentioned by Microsoft under "Universal Accelerated Graphics Port (UAGP)", which specifies mandatory support of extra registers once marked optional under AGP 3.0. Upgraded registers include PCISTS, CAPPTR, NCAPID, AGPSTAT, AGPCMD, NISTAT, NICMD. New required registers include APBASELO, APBASEHI, AGPCTRL, APSIZE, NEPG, GARTLO, GARTHI. There are various physical interfaces (connectors); see the Compatibility section. Official extensions. AGP Pro. An official extension for cards that required more electrical power, with a longer slot with additional pins for that purpose.
AGP Pro cards were usually workstation-class cards used to accelerate professional computer-aided design applications employed in the fields of architecture, machining, engineering, simulations, and similar fields. 64-bit AGP. A 64-bit channel was once proposed as an optional standard for AGP 3.0 in draft documents, but it was dropped in the final version of the standard. The standard allows 64-bit transfer for AGP8× reads, writes, and fast writes; 32-bit transfer for PCI operations. Unofficial variations. A number of non-standard variations of the AGP interface have been produced by manufacturers. Compatibility. AGP cards are backward and forward compatible within limits. 1.5 V-only keyed cards will not go into 3.3 V slots and vice versa, though "Universal" cards exist which will fit into either type of slot. There are also unkeyed "Universal" slots that will accept either type of card. When an AGP Universal card is plugged-into an AGP Universal slot, only the 1.5 V portion of the card is used. Some cards, like Nvidia's GeForce 6 series (except the 6200) or ATI's Radeon X800 series, only have keys for 1.5 V to prevent them from being installed in older mainboards without 1.5 V support. Some of the last modern cards with 3.3 V support were the Nvidia GeForce FX series (FX 5200, FX 5500, FX 5700, some FX 5800, FX 5900 and some FX 5950), certain GeForce 6 Series and 7 series (few cards were made with 3.3v support except for 6200 where 3.3v support was common) and the ATI Radeon 9500/9700/9800 (R300/R350) (but not 9600/9800(R360/RV360)). Some GeForce 6200/6600/6800 and GeForce 7300/7600/7800/7900/7950 cards will function with AGP 1.0 (3.3v) slots, but those are really uncommon compared to their AGP 1.5v only versions. AGP Pro cards will not fit into standard slots, but standard AGP cards will work in a Pro slot. Motherboards equipped with a Universal AGP Pro slot will accept a 1.5 V or 3.3 V card in either the AGP Pro or standard AGP configuration, a Universal AGP card, or a Universal AGP Pro card. Some cards incorrectly have dual notches, and some motherboards incorrectly have fully open slots, allowing a card to be plugged into a slot that does not support the correct signaling voltage, which may damage card or motherboard. Some incorrectly designed older 3.3 V cards have the 1.5 V key. There are some proprietary systems incompatible with standard AGP; for example, Apple Power Macintosh computers with the Apple Display Connector (ADC) have an extra connector which delivers power to the attached display. Some cards designed to work with a specific CPU architecture (e.g., PC, Apple) may not work with others due to firmware issues. Mark Allen of Playtools.com made the following comments regarding Practical AGP Compatibility for AGP 3.0 and AGP 2.0: Power consumption. Actual power supplied by an AGP slot depends upon the card used. The maximum current drawn from the various rails is given in the specifications for the various versions. For example, if maximum current is drawn from all supplies and all voltages are at their specified upper limits, an AGP 3.0 slot can supply up to 48.25 watts; this figure can be used to specify a power supply conservatively, but in practice a card is unlikely ever to draw more than 40 W from the slot, with many using less. AGP Pro provides additional power up to 110 W. Many AGP cards had additional power connectors to supply them with more power than the slot could provide. Protocol. 
An AGP bus is a superset of a 66 MHz conventional PCI bus and, immediately after reset, follows the same protocol. The card must act as a PCI target, and optionally may act as a PCI master. (AGP 2.0 added a "fast writes" extension which allows PCI writes from the motherboard to the card to transfer data at higher speed.) After the card is initialized using PCI transactions, AGP transactions are permitted. For these, the card is always the AGP master and the motherboard is always the AGP target. The card queues multiple requests which correspond to the PCI address phase, and the motherboard schedules the corresponding data phases later. An important part of initialization is telling the card the maximum number of outstanding AGP requests which may be queued at a given time. AGP requests are similar to PCI memory read and write requests, but use a different encoding on command lines C/BE[3:0] and are always 8-byte aligned; their starting address and length are always multiples of 8 bytes (64 bits). The three low-order bits of the address are used instead to communicate the length of the request. Whenever the PCI GNT# signal is asserted, granting the bus to the card, three additional status bits ST[2:0] indicate the type of transfer to be performed next. If the bits are codice_1, a previously queued AGP transaction's data is to be transferred; if the three bits are codice_2, the card may begin a PCI transaction or (if sideband addressing is not in use) queue a request in-band using PIPE#. AGP command codes. Like PCI, each AGP transaction begins with an address phase, communicating an address and 4-bit command code. The possible commands are different from PCI, however: AGP 3.0 dropped high-priority requests and the long read commands, as they were little used. It also mandated side-band addressing, thus dropping the dual address cycle, leaving only four request types: low-priority read (0000), low-priority write (0100), flush (1010) and fence (1100). In-band AGP requests using PIPE#. To queue a request in-band, the card must request the bus using the standard PCI REQ# signal, and receive GNT# plus bus status ST[2:0] equal to codice_2. Then, instead of asserting FRAME# to begin a PCI transaction, the card asserts the PIPE# signal while driving the AGP command, address, and length on the C/BE[3:0], AD[31:3] and AD[2:0] lines, respectively. (If the address is 64 bits, a dual address cycle similar to PCI is used.) For every cycle that PIPE# is asserted, the card sends another request without waiting for acknowledgement from the motherboard, up to the configured maximum queue depth. The last cycle is marked by deasserting REQ#, and PIPE# is deasserted on the following idle cycle. Side-band AGP requests using SBA[7:0]. If side-band addressing is supported and configured, the PIPE# signal is not used. (And the signal is re-used for another purpose in the AGP 3.0 protocol, which requires side-band addressing.) Instead, requests are broken into 16-bit pieces which are sent as two bytes across the SBA bus. There is no need for the card to ask permission from the motherboard; a new request may be sent at any time as long as the number of outstanding requests is within the configured maximum queue depth. The possible values are: Sideband address bytes are sent at the same rate as data transfers, up to 8× the 66 MHz basic bus clock. 
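As described above, an AGP request carries its command on the C/BE[3:0] lines and packs the transfer length into the three low-order address bits, since request addresses are always 8-byte aligned. The sketch below is a simplified illustration of that packing, assuming the common reading of the specification in which AD[2:0] encode the length in 8-byte units (000 for 8 bytes up to 111 for 64 bytes); it is not a register-accurate model of the bus, and the address used is purely hypothetical.

# Simplified model of packing an AGP request onto the AD[31:0] lines.
# Assumption: AD[2:0] encode the length in 8-byte units, with 0b000 = 8 bytes.
def encode_request(address: int, length_bytes: int) -> int:
    if address % 8 != 0:
        raise ValueError("AGP requests must be 8-byte aligned")
    if length_bytes % 8 != 0 or not 8 <= length_bytes <= 64:
        raise ValueError("length must be a multiple of 8 bytes, at most 64")
    length_field = length_bytes // 8 - 1        # 0..7 fits in AD[2:0]
    return (address & ~0x7) | length_field      # address on AD[31:3], length on AD[2:0]

def decode_request(ad: int) -> tuple[int, int]:
    return ad & ~0x7, ((ad & 0x7) + 1) * 8      # (address, length in bytes)

ad_lines = encode_request(0x1F4000, 32)         # hypothetical 32-byte texture read
print(hex(ad_lines), decode_request(ad_lines))  # 0x1f4003 (0x1f4000, 32)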
Sideband addressing has the advantage that it mostly eliminates the need for turnaround cycles on the AD bus between transfers, in the usual case when read operations greatly outnumber writes. AGP responses. While asserting GNT#, the motherboard may instead indicate via the ST bits that a data phase for a queued request will be performed next. There are four queues: two priorities (low- and high-priority) for each of reads and writes, and each is processed in order. Obviously, the motherboard will attempt to complete high-priority requests first, but there is no limit on the number of low-priority responses which may be delivered while the high-priority request is processed. For each cycle when the GNT# is asserted and the status bits have the value codice_13, a read response of the indicated priority is scheduled to be returned. At the next available opportunity (typically the next clock cycle), the motherboard will assert TRDY# (target ready) and begin transferring the response to the oldest request in the indicated read queue. (Other PCI bus signals like FRAME#, DEVSEL# and IRDY# remain deasserted.) Up to four clock cycles worth of data (16 bytes at AGP 1× or 128 bytes at AGP 8×) are transferred without waiting for acknowledgement from the card. If the response is longer than that, both the card and motherboard must indicate their ability to continue on the third cycle by asserting IRDY# (initiator ready) and TRDY#, respectively. If either one does not, wait states will be inserted until two cycles after they both do. (The value of IRDY# and TRDY# at other times is irrelevant and they are usually deasserted.) The C/BE# byte enable lines may be ignored during read responses, but are held asserted (all bytes valid) by the motherboard. The card may also assert the RBF# (read buffer full) signal to indicate that it is temporarily unable to receive more low-priority read responses. The motherboard will refrain from scheduling any more low-priority read responses. The card must still be able to receive the end of the current response, and the first four-cycle block of the following one if scheduled, plus any high-priority responses it has requested. For each cycle when GNT# is asserted and the status bits have the value codice_14, write data is scheduled to be sent across the bus. At the next available opportunity (typically the next clock cycle), the card will assert IRDY# (initiator ready) and begin transferring the data portion of the oldest request in the indicated write queue. If the data is longer than four clock cycles, the motherboard will indicate its ability to continue by asserting TRDY# on the third cycle. Unlike reads, there is no provision for the card to delay the write; if it didn't have the data ready to send, it shouldn't have queued the request. The C/BE# lines "are" used with write data, and may be used by the card to select which bytes should be written to memory. The multiplier in AGP 2×, 4× and 8× indicates the number of data transfers across the bus during each 66 MHz clock cycle. Such transfers use source synchronous clocking with a "strobe" signal (AD_STB[0], AD_STB[1], and SB_STB) generated by the data source. AGP 4× adds complementary strobe signals. Because AGP transactions may be as short as two transfers, at AGP 4× and 8× speeds it is possible for a request to complete in the middle of a clock cycle. In such a case, the cycle is padded with dummy data transfers (with the C/BE# byte enable lines held deasserted). Connector pinout. 
The AGP connector contains almost all PCI signals, plus several additions. The connector has 66 contacts on each side, although 4 are removed for each keying notch. Pin 1 is closest to the I/O bracket, and the B and A sides are as in the table, looking down at the motherboard connector. Contacts are spaced at 1 mm intervals, however they are arranged in two staggered vertical rows so that there is 2 mm space between pins in each row. Odd-numbered A-side contacts, and even-numbered B-side contacts are in the lower row (1.0 to 3.5 mm from the card edge). The others are in the upper row (3.7 to 6.0 mm from the card edge). PCI signals omitted are: Signals added are:
2381
Andreas Aagesen
Andreas Aagesen (5 August 1826 – 26 October 1879) was a Danish jurist. Biography. Aagesen was educated for the law at Christianshavn and Copenhagen, and interrupted his studies in 1848 to take part in the First Schleswig War, in which he served as the leader of a reserve battalion. In 1855 Aagesen became a professor of jurisprudence at the University of Copenhagen. In 1870 he was appointed a member of the commission for drawing up a maritime and commercial code, and the navigation law of 1882 is mainly his work. In 1879 he was elected a member of the Landsting (one of two chambers of the Danish Parliament, the Rigsdagen); but it is as a teacher at the university that he won his reputation. Aagesen was Carl Christian Hall's successor as lecturer on Roman law at the university, and in this department his research was epoch-making. Bibliography. Among his numerous juridical works may be mentioned:
2382
Aalen
Aalen () is a former Free Imperial City located in the eastern part of the German state of Baden-Württemberg, about east of Stuttgart and north of Ulm. It is the seat of the Ostalbkreis district and is its largest town. It is also the largest town in the Ostwürttemberg region. Since 1956, Aalen has had the status of Große Kreisstadt (major district town). It is noted for its many half-timbered houses constructed from the 16th century through the 18th century. With an area of 146.63 km2, Aalen is ranked 7th in Baden-Württemberg and 2nd within the Government Region of Stuttgart, after Stuttgart. With a population of about 66,000, Aalen is the 15th most-populated settlement in Baden-Württemberg. Geography. Situation. Aalen is situated on the upper reaches of the river Kocher, at the foot of the Swabian Jura which lies to the south and south-east, and close to the hilly landscapes of the Ellwangen Hills to the north and the "Welland" to the north-west. The west of Aalen's territory is on the foreland of the eastern Swabian Jura, and the north and north-west is on the Swabian-Franconian Forest, both being part of the Swabian Keuper-Lias Plains. The south-west is part of the Albuch, the east is part of the Härtsfeld, these two both being parts of the Swabian Jura. The Kocher enters the town's territory from Oberkochen to the south, crosses the district of Unterkochen, then enters the town centre, where the "Aal" flows into it. The "Aal" is a small river located only within the town's territory. Next, the Kocher crosses the district of Wasseralfingen, then leaves the town for Hüttlingen. Rivers originating near Aalen are the Rems (near Essingen, west of Aalen) and the Jagst (near Unterschneidheim, east of Aalen), both being tributaries of the Neckar, just like the Kocher. The elevation in the centre of the market square is relative to Normalhöhennull. The territory's lowest point is at the Lein river near Rodamsdörfle, the highest point is the Grünberg's peak near Unterkochen at . Geology. Aalen's territory ranges over all lithostratigraphic groups of the South German Jurassic: Aalen's south and the "Flexner" massif are on top of the White Jurassic, the town centre is on the Brown Jurassic, and a part of Wasseralfingen is on the Black Jurassic. As a result, the town advertises itself as a "Geologist's Mecca". Most parts of the territory are on the "Opalinuston-Formation" (Opalinum Clay Formation) of the Aalenian subdivision of the Jurassic Period, which is named after Aalen. On the "Sandberg", the "Schnaitberg" and the "Schradenberg" hills, all in the west of Aalen, the "Eisensandstein" (Iron Sandstone) formation emerges to the surface. On the other hills of the city, sands "(Goldshöfer Sande)", gravel and residual rubble prevail. The historic centre of Aalen and the other areas in the Kocher valley are founded completely on holocenic floodplain loam "(Auelehm)" and riverbed gravel that have filled in the valley. Most parts of Dewangen and Fachsenfeld are founded on formations of "Jurensismergel" (Jurensis Marl), "Posidonienschiefer" (cf. Posidonia Shale), "Amaltheenton" (Amalthean Clay), "Numismalismergel" (Numismalis Marl) and "Obtususton" (Obtusus Clay, named after Asteroceras obtusum ammonites) moving from south to north, all belonging to the Jurassic and being rich in fossils. They are at last followed by the "Trossingen Formation" already belonging to the Late Triassic. Until 1939 iron ore was mined on the "Braunenberg" hill. (see Tiefer Stollen section). Extent of the borough. 
The maximum extent of the town's territory amounts to in a north–south dimension and in an east–west dimension. The area is , which includes 42.2% agriculturally used area and 37.7% of forest. 11.5% are built up or vacant, 6.4% is used by traffic infrastructure. Sporting and recreation grounds and parks comprise 1% , other areas 1.1% . Adjacent towns. The following municipalities border on Aalen. They are listed clockwise, beginning south, with their respective linear distances to Aalen town centre given in brackets: Oberkochen (), Essingen (), Heuchlingen (), Abtsgmünd (), Neuler (), Hüttlingen (), Rainau (), Westhausen (), Lauchheim (), Bopfingen () and Neresheim (), all in the Ostalbkreis district, furthermore Heidenheim an der Brenz () and Königsbronn (), both in Heidenheim district. Boroughs. Aalen's territory consists of the town centre "(Kernstadt)" and the municipalities merged from between 1938 (Unterrombach) and 1975 (Wasseralfingen, see mergings section). The municipalities merged in the course of the latest municipal reform of the 1970s are also called "Stadtbezirke" (quarters or districts), and are "Ortschaften" ("settlements") in terms of Baden-Württemberg's "Gemeindeordnung" (municipal code), which means, each of them has its own council elected by its respective residents "(Ortschaftsrat)" and is presided by a spokesperson "(Ortsvorsteher)". The town centre itself and the merged former municipalities consist of numerous villages "(Teilorte)", mostly separated by open ground from each other and having their own independent and long-standing history. Some however have been created as planned communities, which were given proper names, but no well-defined borders. List of villages: Spatial planning. Aalen forms a "Mittelzentrum" ("medium-level centre") within the Ostwürttemberg region. Its designated catchment area includes the following municipalities of the central and eastern Ostalbkreis district: Abtsgmünd, Bopfingen, Essingen, Hüttlingen, Kirchheim am Ries, Lauchheim, Neresheim, Oberkochen, Riesbürg and Westhausen, and is interwoven with the catchment area of Nördlingen, situated in Bavaria, east of Aalen. Climate. As Aalen's territory sprawls on escarpments of the Swabian Jura, on the Albuch and the Härtsfeld landscapes, and its elevation has a range of , the climate varies from district to district. The weather station the following data originate from is located between the town centre and Wasseralfingen at about and has been in operation since 1991. The sunshine duration is about 1800 hours per year, which averages 4.93 hours per day. So Aalen is above the German average of 1550 hours per year. However, with 167 days of precipitation, Aalen's region also ranks above the German average of 138. The annual rainfall is , about the average within Baden-Württemberg. The annual mean temperature is . Here Aalen ranks above the German average of and the Baden-Württemberg average of . History. Civic history. First settlements. Numerous remains of early civilization have been found in the area. Tools made of flint and traces of Mesolithic human settlement dated between the 8th and 5th millennium BC were found on several sites on the margins of the Kocher and Jagst valleys. On the "Schloßbaufeld" plateau (appr. ), situated behind "Kocherburg" castle near Unterkochen, a hill-top settlement was found, with the core being dated to the Bronze Age. In the "Appenwang" forest near Wasseralfingen, in Goldshöfe, and in Ebnat, tumuli of the Hallstatt culture were found. 
In Aalen and Wasseralfingen, gold and silver coins left by the Celts were found. The Celts were responsible for the fortifications in the Schloßbaufeld settlement consisting of sectional embankments and a stone wall. Also, Near Heisenberg (Wasseralfingen), a Celtic nemeton has been identified; however, it is no longer readily apparent. Roman era. After abandoning the Alb Limes (a "limes" generally following the ridgeline of the Swabian Jura) around 150 AD, Aalen's territory became part of the Roman Empire, in direct vicinity of the then newly erected Rhaetian Limes. The Romans erected a castrum to house the cavalry unit "Ala II Flavia milliaria"; its remains are known today as "Kastell Aalen" ("Aalen Roman fort"). The site is west of today's town centre at the bottom of the "Schillerhöhe" hill. With about 1,000 horsemen and nearly as many grooms, it was the largest fort of auxiliaries along the Rhaetian Limes. There were Civilian settlements adjacent along the south and the east. Around 260 AD, the Romans gave up the fort as they withdrew their presence in unoccupied Germania back to the Rhine and Danube rivers, and the Alamanni took over the region. Based on 3rd- and 4th-century coins found, the civilian settlement continued to exist for the time being. However, there is no evidence of continued civilization between the Roman era and the Middle Ages. Foundation. Based on discovery of alamannic graves, archaeologists have established the 7th century as the origination of Aalen. In the northern and western walls of St. John's church, which is located directly adjacent to the eastern gate of the Roman fort, Roman stones were incorporated. The building that exists today probably dates to the 9th century. The first mention of Aalen was in 839, when emperor Louis the Pious reportedly permitted the Fulda monastery to exchange land with the Hammerstadt village, then known as "Hamarstat". Aalen itself was first mentioned in an inventory list of Ellwangen Abbey, dated ca. 1136, as the village "Alon", along with a lower nobleman named Conrad of Aalen. This nobleman probably had his ancestral castle at a site south of today's town centre and was subject first to Ellwangen abbey, later to the House of Hohenstaufen, and eventually to the House of Oettingen. 1426 was the last time a member of that house was mentioned in connection with Aalen. Documents, from the Middle Ages, indicate that the town of Aalen was founded by the Hohenstaufen some time between 1241 and 1246, but at a different location than the earlier village, which was supposedly destroyed in 1388 during the war between the Alliance of Swabian Cities and the Dukes of Bavaria. Later, it is documented that the counts of Oettingen ruled the town in 1340. They are reported to have pawned the town to Count Eberhard II and subsequently to the House of Württemberg in 1358 or 1359 in exchange for an amount of money. Imperial City. Designation as Imperial City. During the war against Württemberg, Emperor Charles IV took the town without a fight after a siege. On 3 December 1360, he declared Aalen an Imperial City, that is, a city or town responsible only to the emperor, a status that made it a quasi-sovereign city-state and that it kept until 1803. In 1377, Aalen joined the Alliance of Swabian Cities, and in 1385, the term "civitas" appears in the town's seal for the first time. In 1398, Aalen was granted the right to hold markets, and in 1401 Aalen obtained proper jurisdiction. The oldest artistic representation of Aalen was made in 1528. 
It was made as the basis of a lawsuit between the town and the Counts of Oettingen at the Reichskammergericht in Speyer. It shows Aalen surrounded by walls, towers, and double moats. The layout of the moats, which had an embankment built between them, is recognizable by the present streets named "Nördlicher, Östlicher, Südlicher" and "Westlicher Stadtgraben" (Northern, Eastern, Southern and Western Moat respectively). The wall was about tall, 1518 single paces () long and enclosed an area of . During its early years, the town had two town gates: The "Upper" or "Ellwangen Gate" in the east, and St. Martin's gate in the south; however due to frequent floods, St. Martin's gate was bricked up in the 14th century and replaced by the "Lower" or "Gmünd Gate" built in the west before 1400. Later, several minor side gates were added. The central street market took place on the "Wettegasse" (today called "Marktplatz", "market square") and the "Reichsstädter Straße". So the market district stretched from one gate to the other, however in Aalen it was not straight, but with a 90-degree curve between southern (St. Martin's) gate and eastern (Ellwangen) gate. Around 1500, the civic graveyard was relocated from the town church to St. John's Church, and in 1514, the "Vierundzwanziger" ("Group of 24") was the first assembly constituted by the citizens. Reformation. Delegated by Württemberg's Duke Louis III, on 28 June 1575, nearly 30 years after Martin Luther's death, Jakob Andreae, professor and chancellor of the University of Tübingen, arrived in Aalen. The sermon he gave the following day convinced the mayor, the council, and the citizens to adopt the Reformation in the town. Andreae stayed in Aalen for four weeks to help with the change. This brought along enormous changes, as the council forbade the Roman Catholic priests to celebrate masses and give sermons. However, after victories of the imperial armies at the beginning of the Thirty Years' War, the Prince-Provostry of Ellwangen, which still held the right of patronage in Aalen, were able to temporarily bring Catholicism back to Aalen; however after the military successes of the Protestant Union, Protestant church practices were instituted again. Fire of 1634. On the night of 5 September 1634, two ensigns of the army of Bernard of Saxe-Weimar who were fighting with the Swedes and retreating after the Battle of Nördlingen set fire to two powder carriages, to prevent the war material to fall into Croatian hands and to prevent their advance. The result was a conflagration, that some say destroyed portions of the town. There are differing stories regarding this fire. According to 17th-century accounts, the church and all the buildings, except of the "Schwörturm" tower, were casualties of the fire, and only nine families survived. 19th century research by Hermann Bauer, Lutheran pastor and local historian, discovered that the 17th-century account is exaggerated, but he does agree that the town church and buildings in a "rather large" semicircle around it were destroyed. The fire also destroyed the town archive housed in an addition to the church, with all of its documents. After the fire, soldiers of both armies went through the town looting. It took nearly 100 years for the town to reach its population of 2,000. French troops marched through Aalen in 1688 during the Nine Years' War; however, unlike other places, they left without leaving severe damages. 
The French came through again in 1702 during the War of the Spanish Succession and in 1741 during the War of the Austrian Succession; the latter war also caused imperial troops to move through in 1743. The town church's tower collapsed in 1765, presumably because proper building techniques were not utilized during the reconstruction after the fire of 1634. The collapsing tower struck two children of the tower watchman, who died of their injuries, and destroyed the nave, leaving only the altar cross intact. The remaining walls had to be knocked down due to the damage. Reconstruction began the same year, creating the building that exists today. On 22 November 1749, the so-called "Aalen protocol" regulating the cohabitation of Lutherans and Roman Catholics in the jointly ruled territory of Oberkochen was signed in Aalen by the Duchy of Württemberg and the Prince-Provostry of Ellwangen. Aalen had been chosen because of its neutral status as a Free Imperial City. Napoleonic era and end of the Imperial City of Aalen. During the War of the First Coalition (1796), Aalen was looted. The War of the Second Coalition concluded in 1801 with the signing of the Treaty of Lunéville, which led to the German Mediatisation of 1803 that assigned most Imperial Cities to the neighbouring principalities. Aalen was assigned to the Electorate of Württemberg, which later became the Kingdom of Württemberg, and became the seat of the District ("Oberamt") of Aalen. During the War of the Third Coalition, on 6 October 1805, Napoleon Bonaparte arrived in Aalen with an army of 40,000. This event, along with Bavarian and Austrian troops moving in some days later, caused miseries that according to the town clerk "no feather could describe". In 1811, the municipality of Unterrombach was formed out of some villages previously belonging to Aalen and some to the Barons of Wöllwarth, while the eastern villages were assigned to the municipality of Unterkochen. By the age of the Napoleonic wars, the town walls were no longer of use, and in the 18th century the maintenance of walls, gates and towers had become increasingly neglected. Finally, as funds were lacking, most towers were demolished starting in 1800, and the other fortification buildings soon followed. Industrial revolution. Before the industrial revolution, Aalen's economy was shaped by its rural setting. Many citizens pursued farming alongside their craft, such as tanning. In the mid 19th century, there were twelve tanneries in Aalen, due to the proximity of Ulm, an important sales market. Other crafts that added to the economy were weaving mills, which produced linen and woolen goods, and the baking of sweet pastry and gingerbread. In Aalen, industrialisation was a slow process. The first major increase was in the 1840s, when three factories for nails and some other factories emerged. It was the link with the railway network, by the opening of the Rems Railway from Cannstatt to Wasseralfingen in 1861, that brought more industry to Aalen, along with the royal steel mill (later "Schwäbische Hüttenwerke") in Wasseralfingen. The Rems Railway's extension to Nördlingen in 1863, the opening of the Brenz Railway in 1864 and of the Upper Jagst Railway in 1866 turned Aalen into a railway hub. Furthermore, between 1901 and its shutdown in 1972, the Härtsfeld Railway connected Aalen with Dillingen an der Donau via Neresheim. Becoming a rail hub also brought more jobs based on the rail industry.
These included a maintenance facility, a roundhouse, an administrative office, two track maintenance shops, and a freight station with an industrial branch line. This helped shape Aalen into what today's historians call a "railwayman's town". Starting in 1866, the town's utilities were gradually upgraded: the Aalen gasworks were opened and gas lighting was introduced, a modern water supply system followed in 1870, and mains electricity in 1912. Finally, in 1935, the first electrically powered street lights were installed. To fight the housing shortage during and immediately after World War I, the town set up barracks settlement areas at the "Schlauch" and "Alter Turnplatz" grounds. In spite of industry being crippled by the Great Depression of 1929, the public baths at the Hirschbach creek were modernized, extended and re-opened in 1931. Nazi era. In the federal election of 1932, the Nazi Party performed below average in Aalen with 25.8% of the votes, compared to 33.1% on the national level, thus finishing second to the Centre Party, which had 26.6% (11.9% nationwide) of the votes, and ahead of the Social Democratic Party of Germany with 19.8% (20.4% nationwide). However, the March 1933 federal elections showed that sentiment had changed: the Nazi Party received 34.1% of the votes (still below its national average of 43.9%), making it by far the leading vote-getter in Aalen, followed by the Centre Party at 26.6% (11.3% nationwide) and the Social Democrats at 18.6% (18.3% nationwide). The democratically elected mayor Friedrich Schwarz remained in office until the Nazis removed him in 1934 and replaced him with the chairman of the Nazi Party group in the town council, brewery owner Karl Barth. Barth served as provisional mayor until Karl Schübel was installed as a more permanent replacement. In August 1934, the Nazi consumer fair Braune Messe ("brown fair") was held in Aalen. During Nazi rule, many military offices were constructed in Aalen, starting in 1936 with a military district riding and driving school for Wehrkreis V. The Nazis also built an army replenishment office "(Heeresverpflegungsamt)", a branch arsenal office "(Heeresnebenzeugamt)" and a branch army ammunitions institute "(Heeresnebenmunitionsanstalt)". Starting in 1935, mergers of neighbouring towns began. In 1938, the Oberamt was transformed into the Landkreis of Aalen and the municipality of Unterrombach was disbanded. Its territory was mostly added to Aalen, with the exception of Hammerstadt, which was added to the municipality of Dewangen. Forst, Rauental and Vogelsang were added to Essingen (in 1952 the entire former municipality of Unterrombach was merged into Aalen, with the exception of Forst, which remains part of Essingen to the present day). In September 1944, the "Wiesendorf" concentration camp, a subcamp of Natzweiler-Struthof, was constructed nearby. It was designated for between 200 and 300 prisoners, who were utilized for forced labor in industrial businesses nearby. Until the camp's dissolution in February 1945, 60 prisoners died. Between 1946 and 1957, the camp buildings were torn down; however, their foundations are still in place at the house "Moltkestraße 44/46". There were also several other labour camps, where prisoners of war along with women and men from countries occupied by Germany were pooled. The prisoners at these other camps had to work for the arms industry in major businesses like "Schwäbische Hüttenwerke" and the "Alfing Keßler" machine factory.
In the civic hospital, the deaconesses on duty were gradually replaced by National Socialist People's Welfare nurses. Nazi eugenics led to the compulsory sterilization of some 200 persons there. Aalen avoided most of the combat activity during World War II. It was only during the last weeks of the war that Aalen became a target of air warfare, which led to the destruction of, and severe damage to, parts of the town, the train station, and other railway installations. A series of air attacks lasting for more than three weeks reached its peak on 17 April 1945, when United States Army Air Forces planes bombed the branch arsenal office and the train station. During this raid, 59 people were killed, more than half of them buried by debris, and more than 500 lost their homes. Also, 33 residential buildings, 12 other buildings and 2 bridges were destroyed, and 163 buildings, including 2 churches, were damaged. Five days later, the Nazi rulers of Aalen were unseated by the US forces. Post-war era. Aalen became part of the state of Baden-Württemberg upon its creation in 1952. Then, with the Baden-Württemberg territorial reform of 1973, the District of Aalen was merged into the Ostalbkreis district. Subsequently, Aalen became the seat of that district, and in 1975, the town's borough attained its present size (see below). In 1946, the population of Aalen exceeded 20,000, the requirement for a town to gain the status of Große Kreisstadt ("major district town"). On 1 August 1947, Aalen was declared "Unmittelbare Kreisstadt" ("immediate district town"), and with the creation of the Gemeindeordnung (municipal code) of Baden-Württemberg on 1 April 1956, it was declared "Große Kreisstadt". Religions. On 31 December 2008, 51.1 percent of Aalen's inhabitants were members of the Catholic Church and 23.9 percent were members of the Evangelical-Lutheran Church. About 25 percent belonged to another or no religious community, or gave no information. Waldhausen was the district with the highest percentage of Roman Catholic inhabitants, at 75.6 percent, while the central district had the highest percentage of Evangelical-Lutheran inhabitants, at 25.6 percent, as well as of those claiming no religious preference, at 32.5 percent. Protestantism. Aalen's population was originally subject to the jus patronatus of Ellwangen Abbey, and thus subject to the Roman Catholic Diocese of Augsburg. With the assistance of the Duke of Württemberg, the Reformation was implemented in Aalen in 1575. Subsequently, Aalen was a predominantly Protestant town for centuries, with the exception of the years from 1628 until 1632 (see reformation section). Being an Imperial City, Aalen could govern its clerical matters on its own, so clerics, organists and choirmasters were directly subject to the council, which thus exerted bishop-like power. There was even a proper hymn book for Aalen. After the transition to Württemberg in 1803, Aalen became the seat of a deanery, with the dean church being the Town Church (the building, constructed from 1765 to 1767, still exists today). Another popular church is St. John's Church, located at the cemetery and refurbished in 1561. As Aalen's population grew in the 20th century, more parishes were founded: St. Mark's parish with its church building of 1967 and St. Martin's parish with its church of 1974. The Reformation was implemented in the borough of Unterrombach as well, but the community remained a chapel of ease of Aalen.
A proper church, the Christ Church, was erected in 1912 and a parish of its own was established in 1947. In Fachsenfeld, the ruling family of Woellwarth, respectively of Leinroden, implemented the Reformation. A parish church was built in 1591; however, with an influx of Catholics in the 18th century, a Catholic majority was established. The other districts of present-day Aalen remained mostly Catholic after the Reformation, although Wasseralfingen established a Lutheran parish in 1891 and a church, St. Magdalene's Church, in 1893. In Unterkochen, a parish was established after World War II and a church was built in 1960. All four parishes belong to the deanery of Aalen within the Evangelical-Lutheran Church in Württemberg. Furthermore, there are Old Pietistic communities in Aalen. Catholicism. The few Catholics of today's central district were served by the parish of Unterkochen until the 19th century, a situation which continued for some years even after the completion of St. Mary's Church in 1868, which was constructed by Georg Morlok. In 1872, however, Aalen got a parish of its own again, and in 1913 a second Catholic church, Salvator's Church, was completed; in 1969 the Holy Cross Church was also finished. In 1963, a second parish was set up, and in 1972 it received a new church, the new St. Mary's Church, which was erected in place of the old St. Mary's Church, torn down in 1968. Another church of the second parish was St. Augustine's Church, completed in 1970. Finally, in 1976 and 1988, St. Elizabeth's Church and St. Thomas' Church were completed. In addition, the St. Michael pastoral care office was built in 1963. Hofherrnweiler has had its own Catholic church, St. Boniface's, since 1904. The villages of Dewangen, Ebnat, Hofen, Waldhausen and Wasseralfingen had remained Catholic after the Reformation, so old parishes and churches persist there. The "Assumption of Mary" Church in Dewangen has an early Gothic tower and a newly built nave (1875). Mary's Immaculate Conception Church in Ebnat was constructed in 1723; however, the church was first mentioned in 1298. Hofen's Saint George's Church is a fortified church whose current nave was built between 1762 and 1775. Alongside the church stands the Late Gothic St. Odile's Chapel, whose entrance bears the engraved year 1462. Foundations of prior buildings have been dated to the 11th and 13th centuries. St. Mary's Church of Unterkochen was first mentioned in 1248 and served the Catholics of Aalen for a long time. Waldhausen's parish church of St. Nicholas was built between 1699 and 1716. Wasseralfingen was at first a chapel of ease of Hofen, but later had its own chapel, St. Stephen, built, presumably in 1353 and remodeled in 1832. In 1834, a parish of its own was established, which built a new St. Stephen's Church. The new building, in the Romanesque Revival style, was erected between 1881 and 1883 and has since remained the parish's landmark. Fachsenfeld also received its own church, named Sacred Heart, in 1895. All Catholic parishes within Aalen are today incorporated into four pastoral care units within the "Ostalb" Deanery of the Diocese of Rottenburg-Stuttgart; however, these units also comprise some parishes outside of Aalen. 
Pastoral Care Unit two comprises the parishes of Essingen, Dewangen and Fachsenfeld; unit four comprises Hofen and Wasseralfingen; unit five comprises both parishes of Aalen's centre plus Hofherrnweiler; and unit six comprises Waldhausen, Ebnat, Oberkochen and Unterkochen. Other Christian communities. In addition to the two major denominations, Aalen is also home to free churches and other communities, including the United Methodist Church, the Baptists, the Seventh-day Adventist Church and the New Apostolic Church. Other religions. Until the late 19th century, no Jews were documented in Aalen. In 1886, four Jews were living in Aalen, a number that rose to ten in 1900, fell to seven in 1905, and remained at that level until 1925. Upon the Nazis' rise to power in 1933, seven Jews, including two children, lived in Aalen. During the Kristallnacht in 1938, the shop windows of the three Jewish shops in the town were smashed and their proprietors imprisoned for several weeks. After their release, most of Aalen's Jews emigrated. The last Jewish resident of Aalen, Fanny Kahn, was forcibly resettled to Oberdorf am Ipf, which had a large Jewish community. Today, a street of Aalen is named after her. Max Pfeffer, a Jewish shopkeeper, returned from Brussels to Aalen in 1948 to continue his shop, but emigrated to Italy in 1967. Aalen has an Islamic Ditib community, which maintains the "D.I.T.I.B. Mosque of Aalen (Central Mosque)" at Ulmer Straße; its construction started on 30 August 2008. The Islamist Millî Görüş organisation maintains the Fatih Mosque, also at Ulmer Straße. Mergers. The present-day makeup of Aalen was created on 21 June 1975 by the unification of the towns of Aalen and Wasseralfingen, initially under the name "Aalen-Wasseralfingen". This annexation made Aalen's territory one third larger than before. On 1 July 1975, the name "Aalen" was restored. Prior to this merger, the town of Aalen had already annexed the following municipalities: Population development and structure. During the Middle Ages and the early modern period, Aalen was just a small town with a few hundred inhabitants. The population grew slowly owing to numerous wars, famines and epidemics. It was with the beginning of the Industrial Revolution in the 19th century that Aalen's growth accelerated. Whereas in 1803 only 1,932 people inhabited the town, by 1905 the population had already increased to 10,442. It continued to rise and reached 15,890 in 1939. The influx of refugees and ethnic Germans from Germany's former eastern territories after World War II pushed the population to 31,814 by 1961. The merger with Wasseralfingen on 21 June 1975 added 14,597 persons and resulted in a total population of 65,165 people. On 30 June 2005, the population, as officially determined by the Statistical Office of Baden-Württemberg, was 67,125. The following overview shows how the population figures of the borough were ascertained. Until 1823, the figures are mostly estimates; thereafter they are census results or official updates by the state statistical office. Starting in 1871, the figures were determined by non-uniform methods of tabulation using extrapolation. On 31 December 2008, Aalen had precisely 66,058 inhabitants, of whom 33,579 were female and 32,479 male. The average age of Aalen's inhabitants rose from 40.5 years in 2000 to 42.4 in 2008. Within the borough, 6,312 foreigners resided, or 9.56 percent of the population. 
Of these, the largest group are from Turkey (38 percent of all foreigners), followed by Italy (13 percent), Croatia (6 percent) and Serbia (5 percent). The number of married residents fell from 32,948 in 1996 to 31,357 in 2007, while the number of divorced residents rose in the same period from 2,625 to 3,859. The number of single residents increased slightly between 1996 and 2004 from 25,902 to 26,268 and fell slightly to 26,147 by 2007. The number of widowed residents fell from 5,036 in 1996 to 4,783 in 2007. Politics. Aalen has formed a municipal association with Essingen and Hüttlingen. Council. Since the local election of 25 May 2014, the town council has consisted of 51 representatives serving a term of five years. The seats are distributed as follows among parties and groups (changes refer to the second-to-last election, of 2004): Mayors. Since 1374, the mayor and the council have governed the town. In the 16th century, the town had two, sometimes three, mayors, and in 1552 the council had 13 members. Later, the head of the administration was reorganized several times. In the Württemberg era, the mayor's title was initially "Bürgermeister", from 1819 it was "Schultheiß", and since 1947 it has been "Oberbürgermeister". The mayor is elected for a term of eight years and is chairman and a voting member of the council. He has one deputy with the official title of "Erster Bürgermeister" ("first mayor") and one with the official title of "Bürgermeister" ("mayor"). Heads of town in Aalen since 1802 Coat of arms and flag. Aalen's coat of arms depicts a black eagle with a red tongue on a golden background, bearing a red shield on its breast with a bent silver eel on it. The eagle and the eel were first acknowledged as Aalen's heraldic animals in the seal of 1385, with the eagle representing the town's imperial immediacy. After the territorial reform, the coat of arms was bestowed again by the Administrative District of Stuttgart on 16 November 1976. The coat of arms' blazon reads: "In gold, the black imperial eagle, with a red breast shield applied to it, therein a bent silver eel" "(In Gold der schwarze Reichsadler, belegt mit einem roten Brustschild, darin ein gekrümmter silberner Aal)". Aalen's flag is striped in red and white and contains the coat of arms. The origin of the town's name is uncertain. Matthäus Merian (1593–1650) presumed the name to originate from the town's location on the Kocher river, where "eels are frequently caught" ("Aal" being German for "eel"). Other explanations point to Aalen as the garrison of an ala during the Roman Empire, or to an abridgement of the Roman name "Aquileia" as a potential name of the Roman fort, a name that nearby Heidenheim an der Brenz bore as well. Another interpretation points to a Celtic word "aa" meaning "water". Godparenthood. On the occasion of the 1980 "Reichsstädter Tage", Aalen took over godparenthood for the more than 3,000 ethnic Germans displaced from the Wischau linguistic enclave, 972 of whom settled in Aalen in 1946. The "Wischau Linguistic Enclave Society" "(Gemeinschaft Wischauer Sprachinsel)" regularly organises commemorative meetings in Aalen. Their traditional costumes are stored in the Old Town Hall. Municipal finances. According to the 2007 municipal survey by the Baden-Württemberg chapter of the German Taxpayers Federation, municipal tax revenues totalling 54.755 million euros (2006) and 62.148 million euros (2007) respectively face the following debts: Twin towns – sister cities. 
Aalen is twinned with: The "Twin Towns Society of Aalen" "(Städtepartnerschaftsverein Aalen e. V.)" promotes friendly relations between Aalen and its twin towns, comprising mutual exchanges between sports and cultural clubs, schools and other civic institutions. On the occasion of the Reichsstädter Tage, the first conference of the twin towns was held from 11 to 13 September 2009. Culture and sights. Theatre. The "Theater der Stadt Aalen" theatre was founded in 1991 and stages 400 to 500 performances a year. Schubart Literary Award. The town endowed the "Schubart Literary Award" "(Schubart-Literaturpreis)" in 1955 in tribute to Christian Friedrich Daniel Schubart, who spent his childhood and youth in Aalen. It is one of the earliest literary awards in Baden-Württemberg and is awarded biennially to German-language writers whose work coincides with Schubart's "liberal and enlightened reasoning". The prize is endowed with 12,000 euros. Music. Founded in 1958, the "Music School of the Town of Aalen" today has about 1,500 students taught by 27 music instructors in 30 subjects. In 1977, a symphony orchestra was founded in Aalen; today called the "Aalener Sinfonieorchester", it consists mostly of instructors and students of the music school. It performs three public concerts annually: the "New Year's Concert" in January, the "Symphony Concert" in July and a "Christmas Concert" in December. Beyond that, music festivals such as the Aalen Jazzfest regularly take place in Aalen. The Aalen volunteer fire department has had a marching band since 1952, whose roots date back to 1883. In 1959, the band received its first glockenspiel from TV host Peter Frankenfeld on the occasion of a TV appearance. The German rapper, designer and singer Cro was born in Aalen and spent his early years there. Arts. The "Kunstverein Aalen" was founded in 1983 as a non-profit art association and today is located in the Old Town Hall. The institution, with more than 400 members, focuses on solo and group exhibitions by international artists. It belongs to the "Arbeitsgemeinschaft Deutscher Kunstvereine" (ADKV), an umbrella organization for non-profit art associations. Museums and memorial sites. Museums. In the central district of Aalen, there are two museums: The "Aalen Limes Museum" "(Limesmuseum Aalen)" is located on the site of what was, until about 200 AD, the largest Roman cavalry fort north of the Alps. Opened in 1964, the museum exhibits numerous objects from the Roman era. The ruins of the cavalry fort beside the museum are open to museum visitors. Every other year, a Roman festival is held in the area of the museum (see below). The Geological-Paleontological Museum, located in the Historic Town Hall, displays more than 1,500 fossils from the Swabian Jura, including ammonites, ichthyosaurs and corals. In the Waldhausen district, the "Heimatstüble" museum of local history has an exhibition on agriculture and rural living. In the Wasseralfingen district, there are two more museums: The "Museum Wasseralfingen" comprises a local history exhibition and an art gallery including works by Hermann Plock, Helmut Schuster and Sieger Köder. The stove plate collection of the "Schwäbische Hüttenwerke" steel mill is also exhibited there, presenting the artists, the modellers and the production sequence of a cast plate from design to final product. Memorial sites. 
A memorial stone at the "Schillerlinde" tree above Wasseralfingen's ore pit is dedicated to four prisoners of the subcamp of the Natzweiler-Struthof concentration camp who were killed there. Also in Wasseralfingen, the cemetery contains a memorial with the Polish inscription "To the victims of Hitler", commemorating the deceased forced labourers buried there. In 1954, the town erected a bell tower on the "Schillerhöhe" hill as a memorial to Aalen's victims of both world wars and to the displacement of ethnic Germans. The tower was planned by Emil Leo, and the bell was endowed by Carl Schneider. The tower is open on request. Every evening at 18:45 (before 2003: at 19:45), the memorial's bell rings. Buildings. Churches. The town centre is dominated by the Evangelical-Lutheran St. Nicholas' Church in the heart of the pedestrian area. The church, built in its present shape between 1765 and 1767, is the only major Late Baroque building in Aalen and is the main church of the Evangelical-Lutheran parish of Aalen. "St. John's Church" is located inside St. John's cemetery in the western centre. The building presumably dates from the 9th century and is thus one of Württemberg's oldest surviving churches. The interior features frescos from the early 13th century. For other churches in Aalen, see the Religions section. Historic Town Hall with "Spy". The Historic Town Hall was originally built in the 14th century. After the fire of 1634, it was reconstructed in 1636. The building received a clock from Lauterburg, and the Imperial City of Nuremberg donated a carillon. It features a figurine of the "Spy of Aalen" and historically displayed other figurines, but the latter were lost in a fire in 1884. Since then, the Spy has resided inside the reconstructed tower and has become a symbol of the town. The building was used as the town hall until 1907. Since 1977, the Geological-Paleontological Museum has resided in the Historic Town Hall. According to legend, the citizens of Aalen owe it to the "Spy of Aalen" "(Spion von Aalen)" that their town was spared from destruction by the emperor's army: The Imperial City of Aalen was once in a quarrel with the emperor, and his army stood just outside the gates, ready to take the town. The people of Aalen became frightened and dispatched their "most cunning" citizen into the enemy's camp to spy out the strength of the troops. Without any detour, he went straight into the middle of the enemy camp, which inevitably led to him being seized and brought before the emperor. When the emperor asked him what business he had there, he answered in Swabian German: "Don't be frightened, high lords, I just want to peek at how many cannons and other war things you've got, since I am the spy of Aalen". The emperor laughed at such blatant, play-acted naïvety, led him all through the camp and then sent him back home. Soon afterwards the emperor withdrew with his army, as he thought a town where such "wise guys" reside deserved to be spared. Old Town Hall. The earliest record of the Old Town Hall dates from 1575. Its outside wall features the oldest known depiction of the town's coat of arms, dating from 1664. Until 1851, the building also housed the "Krone-Post" hotel, which also served as a station of the Thurn und Taxis postal company. It has hosted many notable persons; the so-called "Napoleon Window", with its painted "N", recalls the stay of the French emperor Napoleon Bonaparte in 1805. 
According to legend, when Napoleon was startled by the noise of his soldiers ridiculing the "Spy of Aalen", he hit his head on this window so hard that it bled. The building was used as Aalen's town hall from 1907 until 1975. Today it houses a cabaret café and the stage of the Theatre of the Town of Aalen. The town has adopted the "Wischau Linguistic Enclave Society" through its godparenthood and stores the society's traditional costumes in the building. Bürgerspital. The "Bürgerspital" ("Civic Asylum") is a timber-frame house erected on "Spritzenhausplatz" ("Fire Engine House Square") in 1702. Until 1873, it was used as the civic hospital and later as a retirement home. After a comprehensive renovation in 1980, it was turned into a senior citizens' community centre. Limes-Thermen. On a slope of the "Langert" mountain, south of the town, the "Limes-Thermen" ("Limes Thermae") hot springs are located. They were built in ancient Roman style and opened in 1985. The health spa is supplied with water of about . Market square. The market square is the historic hub of Aalen and runs about from the town hall in the south to the Historic Town Hall and the Old Town Hall in the north, where it empties into "Radgasse" alley. Since 1809, it has been the site of the weekly market on Wednesdays and Saturdays. In front of the "Reichsstädter Brunnen" fountain at the town hall, the coats of arms of Aalen, of its twinned cities and of the Wischau linguistic enclave are set into the street as a mosaic. Market fountain. In 1705, a well casing was erected for Aalen's water supply at the northern end of the market square, in front of the Historic Town Hall. It was a gift from Duke Eberhard Louis. The fountain bore a statue of Emperor Joseph I, who was enthroned in 1705 and renewed Aalen's Imperial City privileges in 1707. The fountain was supplied via a wooden pipe, and excess water was drained through ditches branching off the Kocher river. When Aalen's water network was constructed in the early 1870s, the fountain was replaced by a smaller fountain located a short distance away. In 1975, the old market fountain was re-erected in Baroque style. It bears a replica of the emperor's statue, the original being exhibited in the new town hall's lobby. The cast iron casing plates depict the 1718 coat of arms of the Duchy of Württemberg and the coats of arms of Aalen and of the merged municipalities. Reichsstädter Brunnen. The "Reichsstädter Brunnen" fountain ("Imperial Civic Fountain") is located in front of the town hall at the southern end of the market square. It was created by sculptor Fritz Nuss in 1977 to commemorate Aalen's time as an Imperial City (1360–1803). Around its circumference runs a frieze of bronze figurines illustrating the town's history. Radgasse. The "Radgasse" ("Wheel Alley") features Aalen's oldest façade. Originally a small pond lay alongside it. The buildings were erected between 1659 and 1662 for peasants holding citizenship privileges and were renovated in the mid-1980s. The alley takes its name from the "Wheel" tavern, which stood at the site of today's "Radgasse 15". Tiefer Stollen. The former iron ore pit "Wilhelm" at Braunenberg hill was converted into the "Tiefer Stollen" tourist mine to commemorate the efforts of the miners of old and to serve as a memorial of early industrialisation in the Aalen area. It has a mining museum open to visitors, and a mine railway takes visitors deep into the mountain. 
The Town of Aalen, a sponsorship association, and many citizens volunteered several thousand hours of labour to put the mine into its current state, leaving things in their original condition as far as possible. In 1989, a sanitary gallery was established in which respiratory diseases are treated with rest cures. The Aalen village of Röthard, where the gallery is located, was consequently awarded the title of "Place with sanitary gallery service" in 2004. Observatory. The Aalen Observatory was built in 1969 as a school observatory for the Schubart Gymnasium. In 2001, it was converted into a public observatory and has since been managed by the "Astronomische Arbeitsgemeinschaft Aalen" ("Aalen Astronomical Society"). It is located on Schillerhöhe hill and features two refracting telescopes, manufactured by Carl Zeiss AG, which has its headquarters in nearby Oberkochen and operates a manufacturing works in Aalen (see below). Guided tours and lectures are held regularly at the observatory. Windpark Waldhausen. The "Windpark Waldhausen" wind farm began operations in early 2007. It consists of seven REpower MM92 wind turbines with a nameplate capacity of 2 MW each. The hub height of each wind turbine is , with a rotor diameter of . Aalbäumle observation tower. The tall "Aalbäumle" observation tower stands atop the "Langert" mountain. This popular hiking destination was built in 1898 and remodelled in 1992. It offers a good view over Aalen and the Welland region, as far as the Rosenstein mountain and Ellwangen. Beneath the tower are an adventure playground and a cabin; a flag on the tower signals whether the cabin's restaurant is open. Natural monuments. The Baden-Württemberg State Institute for Environment, Measurements and Natural Conservation has designated six protected landscapes in Aalen (the "Swabian Jura escarpment between Lautern and Aalen with adjacent territories", the "Swabian Jura escarpment between Unterkochen and Baiershofen", the "Hilllands around Hofen", the "Kugeltal and Ebnater Tal valleys with parts of Heiligental valley and adjacent territories", "Laubachtal valley" and "Lower Lein Valley with side valleys"), two sanctuary forests ("Glashütte" and "Kocher Origin"), 65 extensive natural monuments, 30 individual natural monuments and the following two protected areas: The large "Dellenhäule" protected area between Aalen's Waldhausen district and Neresheim's Elchingen district, created in 1969, is a sheep pasture with juniper and a wood pasture of old willow oaks. The large "Goldshöfer Sande" protected area, established in 2000, is situated between Aalen's Hofen district and Hüttlingen. The sands on the hill, which originated in the Early Pleistocene, are of geological importance, and the various grove structures offer habitat to severely endangered bird species. Sports. The football team, VfR Aalen, was founded in 1921 and played in the 2nd German League between 2012 and 2015, after which it was relegated to the 3. Liga. Its playing venue is the Scholz-Arena in the west of the town, which bore the name "Städtisches Waldstadion Aalen" ("Civic Forest Stadium of Aalen") until 2008. From 1939 until 1945, the VfR played in the Gauliga Württemberg, then one of several parallel top-ranking soccer leagues of Germany. The KSV Aalen wrestles in the Wrestling Federal League and was German champion in team wrestling in 2010. Its predecessor, the "KSV Germania Aalen", which disbanded in 2005, had been German champion eight times and runner-up five times since 1976. 
Another Aalen club, the TSV Dewangen, wrestled in the Federal League until 2009. Two American sports, American football and baseball, are played at "MTV Aalen". Volleyball has been gaining popularity in Aalen for years; the first men's team of "DJK Aalen" qualified for the regional league in the 2008/09 season. The "Ostalb" ski lifts are located south of the town centre, on the northern slope of the Swabian Jura. The skiing area comprises two platter lifts with a vertical rise of , two runs with lengths of , and a beginners' run. Regular events. Reichsstädter Tage. Since 1975, the "Reichsstädter Tage" ("Imperial City Days") festival has been held annually in the town centre on the second weekend in September. It is considered the largest festival in the Ostwürttemberg region and is associated with a shopping Sunday in accordance with the Ladenschlussgesetz code. The festival is also attended by delegations from the twinned cities, and on the Sunday an ecumenical service is held on the town hall square. Roman Festival. The international Roman Festival "(Römertage)" is held biennially on the site of the former Roman fort and the modern Limes museum. The festival's ninth event, in 2008, was attended by around 11,000 people. Aalen Jazz Festival. Annually during the second week of November, the Aalen Jazz Festival brings known and unknown artists to Aalen. It has featured musicians such as Miles Davis, B. B. King, Ray Charles, David Murray, McCoy Tyner, Al Jarreau, Esbjörn Svensson and Albert Mangelsdorff. The festival is complemented by individual concerts in spring and summer and, including these, comprises around 25 concerts with a total of about 13,000 visitors. Economy and infrastructure. In 2008, there were 30,008 employees subject to social insurance contributions living in Aalen. Of these, 13,946 (46.5 percent) were employed in the manufacturing sector, 4,715 (15.7 percent) in commerce, catering, hotels and transport, and 11,306 (37.7 percent) in other services. About 16,000 employees commute into the town for work, while about 9,000 residents commute out. Altogether there are about 4,700 business enterprises in Aalen, 1,100 of them registered in the trade register; the others comprise 2,865 small enterprises and 701 craft enterprises. In Aalen, metalworking is the predominant industry, along with machine-building. Other industries include optics, paper, information technology, chemicals, textiles, medical instruments, pharmaceuticals, and food. Notable enterprises include "SHW Automotive" (originating from the former "Schwäbische Hüttenwerke" steel mills and a mill of 1671 in Wasseralfingen), the "Alfing Kessler" engineering works, the precision tools manufacturer "MAPAL Dr. Kress", the snow chain manufacturer "RUD Ketten Rieger & Dietz" and its subsidiary "Erlau", the "Gesenkschmiede Schneider" forging die smithery, the "SDZ Druck und Medien" media company, the "Papierfabrik Palm" paper mill, the alarm system manufacturer "Telenot", the laser show provider "LOBO electronic" and the textile finisher "Lindenfarb", all of which are headquartered in Aalen. The optical systems manufacturer Carl Zeiss, headquartered in nearby Oberkochen, maintains a branch in Aalen. Transport. Rail. Aalen station is a regional railway hub on the Rems Railway from Stuttgart, the Brenz Railway from Ulm, the Upper Jagst Railway to Crailsheim and the Ries Railway to Donauwörth. Until 1972, the Härtsfeld Railway connected Aalen with Dillingen an der Donau via Neresheim. 
Other railway stations within the town limits are "Hofen (b Aalen)", "Unterkochen", "Wasseralfingen" and Goldshöfe station. The "Aalen-Erlau" stop in the south is no longer operational. Aalen station is served at two-hour intervals by trains of Intercity line 61 Karlsruhe–Stuttgart–Aalen–Nuremberg. For regional rail travel, Aalen is served by various lines of the Interregio-Express, Regional-Express and Regionalbahn categories. At the beginning of 2019, the British company Go-Ahead took over the regional railway business of DB Regio in the region surrounding Aalen. The town also operates the Aalen industrial railway "(Industriebahn Aalen)", which carries about 250 carloads per year. Bus. Aalen is also a regional hub in the bus network of OstalbMobil, the transport network of the Ostalbkreis district. The bus lines are operated and serviced by regional companies such as OVA and RBS RegioBus Stuttgart. Road. The "Aalen/Westhausen" and "Aalen/Oberkochen" junctions connect Aalen with the Autobahn A7 (Würzburg–Füssen). Federal roads ("Bundesstraßen") connecting with Aalen are the B 19 (Würzburg–Ulm), B 29 (Waiblingen–Nördlingen) and B 290 (Tauberbischofsheim–Westhausen). The Schwäbische Dichterstraße ("Swabian Poets' Route") tourist route, established in 1977/78, leads through Aalen. Several bus lines operate within the borough. The "Omnibus-Verkehr Aalen" company is one of the few in Germany that use double-decker buses, which it has done since 1966. A district-wide fare system, "OstalbMobil", has been in effect since 2007. Air transport. Stuttgart Airport, which offers international connections, is about away; the travel time by train is about 100 minutes. Aalen-Heidenheim Airport, located south-east of Aalen, is open to small aircraft. Gliding airfields nearby are in Heubach and Bartholomä. Bicycle. Bicycle routes passing through Aalen are the "Deutscher Limes-Radweg" ("German Limes Bicycle Route") and the "Kocher-Jagst" Bicycle Route. Public facilities. Aalen houses an Amtsgericht (local district court), chambers of the Stuttgart Labour Court, a notary's office, a tax office and an employment agency. It is the seat of the Ostalbkreis district office, of the Aalen Deanery of the Evangelical-Lutheran Church and of the "Ostalb" deanery of the Roman Catholic Diocese of Rottenburg-Stuttgart. The Stuttgart Administrative Court, the Stuttgart Labour Court and the Ulm Social Welfare Court have jurisdiction over Aalen. Aalen had a civic hospital, which resided in the "Bürgerspital" building until 1873 and then in a building on "Alte Heidenheimer Straße". In 1942, the hospital was taken over by the district. The district hospital at the present site of "Kälblesrain", known today as the "Ostalb-Klinikum", was opened in 1955. Media. The first local newspaper, "Der Bote von Aalen" ("The Herald of Aalen"), has been published on Wednesdays and Saturdays since 1837. Currently, local newspapers published in Aalen are the "Schwäbische Post", which obtains its supra-regional pages from the Ulm-based Südwestpresse, and the "Aalener Nachrichten" (formerly "Aalener Volkszeitung"), a local edition of the Schwäbische Zeitung of Leutkirch im Allgäu. Two of Germany's biggest Lesezirkel (magazine rental services) are headquartered in Aalen: "Brabandt LZ Plus Media" and "Lesezirkel Portal". Regional event magazines are "Xaver", "åla" and "ålakultur". The commercial broadcasters "Radio Ton" and "Radio 7" have studios in Aalen. Education. 
A Latin school was first recorded in Aalen in 1447; remodelled in 1616, it was later housed in various buildings, all situated near the town church, and continued through the 19th century. In the course of the Reformation, a "German school" was established alongside it, a predecessor of the later Volksschule school type. In 1860, the "Ritterschule" was built as a "Volksschule" for girls; the building today houses the "Pestalozzischule". In 1866, a new building was erected for the Latin school and for the Realschule established in 1840. This building, later known as the "Alte Gewerbeschule", was torn down in 1975 to free up land for the new town hall. In 1912, the "Parkschule" building was opened. It was designed by Paul Bonatz and today houses the "Schubart-Gymnasium". The biggest educational institution in the town is the "Hochschule Aalen", which was founded in 1962 and focuses on engineering and economics. It is attended by 5,000 students on five campuses and employs 129 professors and 130 other lecturers. The town provides three Gymnasiums, four Realschulen, two "Förderschulen" (special schools), six combined Grundschulen and Hauptschulen and eight standalone Grundschulen. The Ostalbkreis district provides three vocational schools and three additional special schools. Finally, six non-state schools of various types exist. The German Esperanto Library (German: "Deutsche Esperanto-Bibliothek", Esperanto: "Germana Esperanto-Biblioteko") has been located in the building of the town library since 1989. TV and radio transmission tower. The Südwestrundfunk broadcasting company operates the Aalen transmission tower on the "Braunenberg" hill. The tower was erected in 1956; it is tall and made of reinforced concrete. Things named after Aalen. The following vehicles are named "Aalen":
2383
Alois Alzheimer
Alois Alzheimer (14 June 1864 – 19 December 1915) was a German psychiatrist and neuropathologist and a colleague of Emil Kraepelin. Alzheimer is credited with identifying the first published case of "presenile dementia", which Kraepelin would later identify as Alzheimer's disease. Early life and education. Alzheimer was born in Marktbreit, Bavaria, on 14 June 1864, the son of Anna Johanna Barbara Sabina and Eduard Román Alzheimer. His father served in the office of notary public in the family's hometown. The Alzheimers moved to Aschaffenburg when Alois was still young in order to give their children an opportunity to attend the Royal Humanistic Gymnasium. After graduating with the Abitur in 1883, Alzheimer studied medicine at the University of Berlin, the University of Tübingen, and the University of Würzburg. In his final year at university, he was a member of a fencing fraternity and even received a fine for disturbing the peace while out with his team. In 1887, Alois Alzheimer graduated from Würzburg as Doctor of Medicine. Career. The following year, he spent five months assisting mentally ill women before taking a post at the city mental asylum in Frankfurt, the Städtische Anstalt für Irre und Epileptische (Asylum for Lunatics and Epileptics). Emil Sioli, a noted psychiatrist, was the dean of the asylum. Another neurologist, Franz Nissl, began to work in the same asylum with Alzheimer. Together, they conducted research on the pathology of the nervous system, specifically the normal and pathological anatomy of the cerebral cortex. Alzheimer was the co-founder and co-publisher of the journal "Zeitschrift für die gesamte Neurologie und Psychiatrie", though he never wrote a book that he could call his own. While at the Frankfurt asylum, Alzheimer also met Emil Kraepelin, one of the best-known German psychiatrists of the time. Kraepelin became a mentor to Alzheimer, and the two worked very closely for the next several years. When Kraepelin moved to Munich to work at the Royal Psychiatric Hospital in 1903, he invited Alzheimer to join him. At the time, Kraepelin was doing clinical research on psychosis in senile patients; Alzheimer, on the other hand, was more interested in the laboratory study of senile illnesses. The two men would face many challenges involving the politics of the psychiatric community. For example, both formal and informal arrangements would be made among psychiatrists at asylums and universities to receive cadavers. In 1904, Alzheimer completed his habilitation at Ludwig Maximilian University of Munich, where he was appointed as a professor in 1908. In 1912, he left Munich for the Silesian Friedrich Wilhelm University in Breslau, where he accepted a post as professor of psychiatry and director of the Neurologic and Psychiatric Institute. His health deteriorated shortly after his arrival, and he was hospitalized. Alzheimer died three years later. Auguste Deter. In 1901, Alzheimer observed a patient at the Frankfurt asylum named Auguste Deter. The 51-year-old patient had strange behavioral symptoms, including a loss of short-term memory; she became his obsession over the coming years. Auguste Deter was a victim of the politics of the time in the psychiatric community; the Frankfurt asylum was too expensive for her husband. Herr Deter made several requests to have his wife moved to a less expensive facility, but Alzheimer intervened in these requests. 
Frau Deter, as she was known, remained at the Frankfurt asylum, where Alzheimer had made a deal to receive her records and brain upon her death, paying for the remainder of her stay in return. On 8 April 1906, Frau Deter died, and Alzheimer had her medical records and brain brought to Munich, where he was working in Kraepelin's laboratory. With two Italian physicians, he used the newly developed Bielschowsky stain to identify amyloid plaques and neurofibrillary tangles. These brain anomalies would become identifiers of what later became known as Alzheimer's disease. Findings. Alzheimer discussed his findings on the brain pathology and symptoms of presenile dementia publicly on 3 November 1906, at the Tübingen meeting of the Southwest German Psychiatrists. The attendees at this lecture seemed uninterested in what he had to say. The lecturer who followed Alzheimer was to speak on the topic of "compulsive masturbation", which the audience of 88 individuals was so eagerly awaiting that they sent Alzheimer away without any questions or comments on his discovery of the pathology of a type of senile dementia. Following the lecture, Alzheimer published a short paper summarizing it; in 1907 he wrote a longer paper detailing the disease and his findings. The disease would not become known as Alzheimer's disease until 1910, when Kraepelin named it so in the chapter on "Presenile and Senile Dementia" in the 8th edition of his "Handbook of Psychiatry". By 1911, his description of the disease was being used by European physicians to diagnose patients in the US. Contemporaries. The American Solomon Carter Fuller had given a report similar to Alzheimer's in a lecture five months before Alzheimer's own. Oskar Fischer was a fellow German psychiatrist, 12 years Alzheimer's junior, who reported 12 cases of senile dementia in 1907, around the time that Alzheimer published his short paper summarizing his lecture. Alzheimer and Fischer had different interpretations of the disease, but due to Alzheimer's short life, they never had the opportunity to meet and discuss their ideas. Among the doctors trained by Alois Alzheimer and Emil Kraepelin in Munich at the beginning of the 20th century were the Spanish neuropathologists Nicolás Achúcarro and Gonzalo Rodríguez Lafora, two distinguished disciples of Santiago Ramón y Cajal and members of the Spanish Neurological School. Alzheimer recommended the young and brilliant Nicolás Achúcarro to organize the neuropathological service at the Government Hospital for the Insane in Washington, D.C. (currently NIH), and after two years of work there he was succeeded by Gonzalo Rodríguez Lafora. Other interests. Alzheimer was known for having a variety of medical interests including vascular diseases of the brain, early dementia, brain tumors, forensic psychiatry and epilepsy. Alzheimer was a leading specialist in histopathology in Europe. His colleagues knew him to be a dedicated professor and cigar smoker. Personal life and death. In 1894, Alzheimer married Cecilie Simonette Nathalie Geisenheimer, with whom he had three children. Geisenheimer died in 1901. In August 1912, Alzheimer fell ill on the train on his way to the University of Breslau, where he had been appointed professor of psychiatry in July 1912. Most probably he had a streptococcal infection and subsequent rheumatic fever leading to valvular heart disease, heart failure and kidney failure. He did not recover completely from this illness. 
He died of heart failure on 19 December 1915 at age 51, in Breslau, Silesia (present-day Wrocław, Poland). He was buried on 23 December 1915 next to his wife at the Frankfurt Main Cemetery. Critics and rediscovery. In the early 1990s, critics began to question Alzheimer's findings and form their own hypotheses based on Alzheimer's notes and papers. Amaducci and colleagues hypothesized that Auguste Deter had metachromatic leukodystrophy, a rare condition in which accumulations of fats affect the cells that produce myelin. Claire O'Brien, meanwhile, hypothesized that Auguste Deter actually had a vascular dementing disease.
2384
Aedile
Aedile (from Latin "aedes", "temple edifice") was an elected office of the Roman Republic. Based in Rome, the aediles were responsible for maintenance of public buildings and regulation of public festivals. They also had powers to enforce public order and duties to ensure the city of Rome was well supplied and its civil infrastructure well maintained, akin to modern local government. There were two pairs of aediles: the first were the "plebeian aediles" (Latin "aediles plebis"), and possession of this office was limited to plebeians; the other two were the "curule aediles" (Latin "aediles curules"), open to both plebeians and patricians in alternating years. An "aedilis curulis" was classified as a "magister curulis". The office of aedile was generally held by young men intending to follow the "cursus honorum" to high political office, traditionally after their quaestorship but before their praetorship. It was not a compulsory part of the cursus, and hence a former quaestor could be elected to the praetorship without having held the position of aedile. However, it was an advantageous position to hold because it demonstrated the aspiring politician's commitment to public service, as well as giving him the opportunity to hold public festivals and games, an excellent way to increase his name recognition and popularity. History of the office. Plebeian aediles. The plebeian aediles were created in the same year as the tribunes of the plebs (494 BC). Originally intended as assistants to the tribunes, they guarded the rights of the plebeians with respect to their headquarters, the Temple of Ceres. Subsequently, they assumed responsibility for maintenance of the city's buildings as a whole. Their duties at first were simply ministerial: they were assistants to the tribunes in whatever matters the tribunes might entrust to them, although most matters with which they were entrusted were of minimal importance. Around 446 BC, they were given the authority to care for the decrees of the Senate. When a "senatus consultum" was passed, it would be transcribed into a document and deposited in the public treasury, the "Aerarium". They were given this power because the consuls, who had held it before, had arbitrarily suppressed and altered the documents. They also maintained the acts of the Plebeian Council (People's Assembly), the "plebiscites". Plebiscites, once passed, were also transcribed into a physical document for storage. While their powers grew over time, it is not always easy to distinguish between their powers and those of the censors. Occasionally, if a censor was unable to carry out one of his tasks, an aedile would perform the task instead. Curule aediles. According to Livy (vi. 42), after the passing of the Licinian rogations in 367 BC, an extra day was added to the Roman games; the plebeian aediles refused to bear the additional expense, whereupon the patricians offered to undertake it, on condition that they were admitted to the aedileship. The plebeians accepted the offer, and accordingly two curule aediles were appointed, at first from the patricians alone, then from patricians and plebeians in turn, and lastly from either, at the Tribal Assembly under the presidency of the consul. Curule aediles, as formal magistrates, held certain honors that plebeian aediles (who were not technically magistrates) did not hold. 
Besides having the right to sit on a curule seat ("sella curulis") and to wear a toga praetexta, the curule aediles also held the power to issue edicts ("jus edicendi"). These edicts often pertained to matters such as the regulation of the public markets, or what we might call "economic regulation". Livy suggests, perhaps incorrectly, that both curule as well as plebeian Aediles were sacrosanct. Although the curule aediles always ranked higher than the plebeian, their functions gradually approximated and became practically identical. Within five days after the beginning of their terms, the four aediles (two plebeian, two curule) were required to determine, by lot or by agreement among themselves, what parts of the city each should hold jurisdiction over. Differences between the two. There was a distinction between the two sets of aediles when it came to public festivals. Some festivals were plebeian in nature, and thus were under the superintendence of plebeian aediles. Other festivals were supervised exclusively by the curule aediles, and it was often with these festivals that the aediles would spend lavishly. This was often done so as to secure the support of voters in future elections. Because aediles were not reimbursed for any of their public expenditures, most individuals who sought the office were independently wealthy. Since this office was a stepping stone to higher office and the Senate, it helped to ensure that only wealthy individuals (mostly landowners) would win election to high office. These extravagant expenditures began shortly after the end of Second Punic War, and increased as the spoils returned from Rome's new eastern conquests. Even the decadence of the emperors rarely surpassed that of the aediles under the Republic, as could have been seen during Julius Caesar's aedileship. Election to the office. Plebeian aediles and Curule aediles were elected by the Tribal Assembly. Since the plebeian aediles were elected by the plebeians rather than by all of the people of Rome (plebeians as well as patricians), they were not technically magistrates. Before the passage of the "Lex Villia Annalis", individuals could run for the aedileship by the time they turned twenty-seven. After the passage of this law in 180 BC, a higher age was set, probably thirty-six. By the 1st century BC, aediles were elected in July, and took office on the first day in January. Powers of the office. Cicero (Legg. iii. 3, 7) divides these functions under three heads: (1) Care of the city: the repair and preservation of temples, sewers and aqueducts; street cleansing and paving; regulations regarding traffic, dangerous animals and dilapidated buildings; precautions against fire; superintendence of baths and taverns; enforcement of sumptuary laws; punishment of gamblers and usurers; the care of public morals generally, including the prevention of foreign superstitions and the registration of meretrices. They also punished those who had too large a share of the "ager publicus", or kept too many cattle on the state pastures. (2) Care of provisions: investigation of the quality of the articles supplied and the correctness of weights and measures; the purchase of grain for disposal at a low price in case of necessity. (3) Care of the games: superintendence and organization of the public games, as well as of those given by themselves and private individuals (e.g., at funerals) at their own expense. 
Ambitious persons often spent enormous sums in this manner to win the popular favor with a view to official advancement. Under the Empire. In 44 BC, Julius Caesar added two plebeian aediles called "cereales", whose special duty was the care of the cereal (grain) supply. Under Augustus the office lost much of its importance, its judicial functions and the care of the games being transferred to the praetor, while its city responsibilities were limited by the appointment of an urban prefect. Augustus took for himself its powers over various religious duties. By stripping it of its powers over temples, he effectively destroyed the office, by taking from it its original function. After this point, few people were willing to hold such a powerless office, and Augustus was even known to compel individuals into holding the office. He accomplished this by randomly selecting former tribunes and quaestors for the office. Future emperors would continue to dilute the power of the office by transferring its powers to newly created offices. However, the office did retain some powers over licentiousness and disorder, in particular over the baths and brothels, as well as the registration of prostitutes. In the 3rd century, it disappeared altogether. Under the Empire, Roman colonies and cities often had officials with powers similar to those of the republican aediles, although their powers widely varied. It seems as though they were usually chosen annually. Modern day. Today in Portugal the county mayor can still be referred to as "edil" (e.g. 'O edil de Coimbra', meaning 'the mayor of Coimbra'), a way of reference used also in Brazil and in Romania for any mayors (ex. 'Edil al Bucureștiului', meaning 'mayor of Bucharest'). In Spain (and Latin America) the members of municipal councils are called "concejales" or "ediles". Shakespeare. In his play "Coriolanus", Shakespeare references the aediles. However, they are minor characters, and their chief role is to serve as policemen.
2386
American Airlines
American Airlines is a major US-based airline headquartered in Fort Worth, Texas, within the Dallas–Fort Worth metroplex. It is the largest airline in the world when measured by scheduled passengers carried and revenue passenger mile. American, together with its regional partners and affiliates, operates an extensive international and domestic network with almost 6,800 flights per day to nearly 350 destinations in 48 countries. American Airlines is a founding member of the Northeast Alliance, and also a member of the Oneworld alliance. Regional service is operated by independent and subsidiary carriers under the brand name American Eagle. American Airlines and American Eagle operate out of 10 hubs, with Dallas/Fort Worth International Airport (DFW) being its largest. The airline handles more than 200 million passengers annually with an average of more than 500,000 passengers daily. As of 2021, the company employs 123,400 staff members. History. American Airlines was started in 1930 via a union of more than eighty small airlines. The two organizations from which American Airlines was originated were Robertson Aircraft Corporation and Colonial Air Transport. The former was first created in Missouri in 1921, with both being merged in 1929 into holding company The Aviation Corporation. This, in turn, was made in 1930 into an operating company and rebranded as American Airways. In 1934, when new laws and attrition of mail contracts forced many airlines to reorganize, the corporation redid its routes into a connected system and was renamed American Airlines. The airline fully developed its international business between 1970 and 2000. It purchased Trans World Airlines in 2001. American had a direct role in the development of the Douglas DC-3, which resulted from a marathon telephone call from American Airlines CEO C. R. Smith to Douglas Aircraft Company founder Donald Wills Douglas Sr., when Smith persuaded a reluctant Douglas to design a sleeper aircraft based on the DC-2 to replace American's Curtiss Condor II biplanes. (The existing DC-2's cabin was wide, too narrow for side-by-side berths.) Douglas agreed to go ahead with development only after Smith informed him of American's intention to purchase 20 aircraft. The prototype DST (Douglas Sleeper Transport) first flew on December 17, 1935, the 32nd anniversary of the Wright Brothers' flight at Kitty Hawk. Its cabin was wide, and a version with 21 seats instead of the 14–16 sleeping berths of the DST was given the designation DC-3. There was no prototype DC-3; the first DC-3 built followed seven DSTs off the production line and was delivered to American Airlines. American Airlines inaugurated passenger service on June 26, 1936, with simultaneous flights from Newark, New Jersey, and Chicago, Illinois. American also had a direct role in the development of the DC-10, which resulted from a specification from American Airlines to manufacturers in 1966 to offer a widebody aircraft that was smaller than the Boeing 747, but capable of flying similar long-range routes from airports with shorter runways. McDonnell Douglas responded with the DC-10 trijet shortly after the two companies' merger. On February 19, 1968, the president of American Airlines, George A. Spater, and James S. McDonnell of McDonnell Douglas announced American's intention to acquire the DC-10. American Airlines ordered 25 DC-10s in its first order. The DC-10 made its first flight on August 29, 1970, and received its type certificate from the FAA on July 29, 1971. 
On August 5, 1971, the DC-10 entered commercial service with American Airlines on a round trip flight between Los Angeles and Chicago. In 2011, due to a downturn in the airline industry, American Airlines' parent company, the AMR Corporation, filed for bankruptcy protection. In 2013, American Airlines merged with US Airways but kept the American Airlines name, as it was the better-recognized brand internationally; the combination of the two airlines resulted in the creation of the largest airline in the United States, and ultimately the world. Destinations and hubs. Destinations. As of July 2022, American Airlines flies to 269 domestic destinations and 81 international destinations in 48 countries (as of January 2022) in five continents. Hubs. American currently operates ten hubs. Alliance and codeshare agreements. American Airlines is a member of the Oneworld alliance and has codeshares with the following airlines: Joint ventures. In addition to the above codeshares, American Airlines has entered into joint ventures with the following airlines: Fleet. As of January 2023, American Airlines operates the largest commercial fleet in the world, comprising 933 aircraft from both Boeing and Airbus, with an additional 161 planned or on order. Over 80% of American's aircraft are narrow-bodies, mainly Airbus A320 series and the Boeing 737-800. It is the largest A320 series aircraft operator in the world, as well as the largest operator of the A319 and A321 variants. It is the fourth-largest operator of 737 family aircraft and second-largest operator of the 737-800 variant. American's wide-body aircraft are all Boeing airliners. It is the third-largest operator of the Boeing 787 series and the sixth-largest operator of the Boeing 777 series. American exclusively ordered Boeing aircraft throughout the 2000s. This strategy shifted on July 20, 2011, when American announced the largest combined aircraft order in history for 460 narrow-body jets including 260 aircraft from the Airbus A320 series. Additional Airbus aircraft joined the fleet in 2013 during the US Airways merger, which operated a nearly all Airbus fleet. On August 16, 2022, American announced that a deal had been confirmed with Boom Supersonic to purchase at least 20 of their Overture supersonic airliners and potentially up to 60 in total. American Airlines operates aircraft maintenance and repair bases at the Charlotte, Chicago O'Hare, Dallas–Fort Worth, Pittsburgh, and Tulsa airports. Only American's widebody planes and its specially-configured Airbus A321T feature seatback entertainment. All other A321 and all Boeing 737 planes were retrofitted with their "Oasis" configuration. While this configuration adds larger overhead bins, it also added more seats, reduced legroom and seat padding, and removed seatback entertainment, which has drawn ire from some travelers. Cabins. Flagship First is American's international and transcontinental first class product. It is offered only on Boeing 777-300ERs and select Airbus A321s which American designates "A321T". The seats are fully lie-flat and offer direct aisle access with only one on each side of the aisle in each row. As with the airline's other premium cabins, Flagship First offers wider food and beverage options, larger seats, and lounge access at certain airports. 
American offers domestic Flagship First service on transcontinental routes between New York–JFK and Los Angeles, New York–JFK and San Francisco, New York-JFK and Santa Ana, Boston and Los Angeles, and Miami and Los Angeles, as well as on the standard domestic route between New York-JFK and Boston. The airline will debut new Flagship Suite® premium seats and a revamped aircraft interior for its long-haul fleet with fresh deliveries of its Airbus A321XLR and Boeing 787-9 aircraft, beginning in 2024. Flagship Business is American's international and transcontinental business class product. It is offered on all Boeing 777-200ERs, Boeing 777-300ERs, Boeing 787-8s, and Boeing 787-9s, as well as select Airbus A321s. All Flagship Business seats are fully lie-flat. First class is offered on all domestically configured aircraft. Seats range from in width and have of pitch. Dining options include complementary alcoholic and non-alcoholic beverages on all flights as well as standard economy snack offerings, enhanced snack basket selections on flights over , and meals on flights or longer. Premium Economy is American's economy plus product. It is offered on all widebody aircraft. The cabin debuted on the airline's Boeing 787-9s in late 2016 and is also available on Boeing 777-200s and -300s, and Boeing 787-8s. Premium Economy seats are wider than seats in the main cabin (American's economy cabin) and provide more amenities: Premium Economy customers get two free checked bags, priority boarding, and enhanced food and drink service including free alcohol. This product made American Airlines the first U.S. carrier to offer a four-cabin aircraft. Main Cabin Extra is American's enhanced economy product. It is available on all of the mainline fleet and American Eagle aircraft. Main Cabin Extra seats include greater pitch than is available in main cabin, along with free alcoholic beverages and boarding one group ahead of main cabin. American retained Main Cabin Extra when the new Premium Economy product entered service in late 2016. Main Cabin (economy class) is American's economy product and is found on all mainline and regional aircraft in its fleet. Seats range from in width and have of pitch. American markets a number of rows within the main cabin immediately behind Main Cabin Extra as "Main Cabin Preferred", which require an extra charge to select for those without status. American Airlines marketed increased legroom in economy class as "More Room Throughout Coach", also referred to as "MRTC", starting in February 2000. Two rows of economy class seats were removed on domestic narrowbody aircraft, resulting in more than half of all standard economy seats having a pitch of or more. Amid financial losses, this scheme was discontinued in 2004. On many routes, American also offers Basic Economy, the airline's lowest main cabin fare. Basic Economy consists of a Main Cabin ticket with numerous restrictions including waiting until check-in for a seat assignment, no upgrades or refunds, and boarding in the last group. Originally Basic Economy passengers could only carry a personal item, but American later revised their Basic Economy policies to allow for a carry-on bag. In May 2017, American announced it would be adding more seats to some of its Boeing 737 MAX 8 jets and reducing overall legroom in the basic economy class. The last three rows were to lose , going from the current . The remainder of the main cabin was to have of legroom. 
This "Project Oasis" seating configuration has since been expanded to all 737 MAX 8s as well as standard Boeing 737-800 and non-transcontinental Airbus A321 jets. New Airbus A321neo jets have been delivered with the same configuration. This configuration has been considered unpopular with passengers, especially American's frequent flyers, as the new seats have less padding, less legroom, and no seatback entertainment. Reward programs. AAdvantage. AAdvantage is the frequent flyer program for American Airlines. It was launched on May 1, 1981, and it remains the largest frequent flyer program with over 115 million members as of 2021. Miles accumulated in the program allow members to redeem tickets, upgrade service class, or obtain free or discounted car rentals, hotel stays, merchandise, or other products and services through partners. The most active members, based on the amount and price of travel booked, are designated AAdvantage Gold, AAdvantage Platinum, AAdvantage Platinum Pro, and AAdvantage Executive Platinum elite members, with privileges such as separate check-in, priority upgrade, and standby processing, or free upgrades. They also receive similar privileges from AA's partner airlines, particularly those in oneworld. AAdvantage co-branded credit cards are also available and offer other benefits. The cards are issued by CitiCards, a subsidiary of Citigroup, Barclaycard, and Bilt card in the United States, by several banks including Butterfield Bank and Scotiabank in the Caribbean, and by Banco Santander in Brazil. AAdvantage allows one-way redemption, starting at 7,500 miles. Admirals Club. The Admirals Club was conceived by AA president C.R. Smith as a marketing promotion shortly after he was made an honorary Texas Ranger. Inspired by the Kentucky colonels and other honorary title designations, Smith decided to make particularly valued passengers "admirals" of the "Flagship fleet" (AA called its aircraft "Flagships" at the time). The list of admirals included many celebrities, politicians, and other VIPs, as well as more "ordinary" customers who had been particularly loyal to the airline. There was no physical Admirals Club until shortly after the opening of LaGuardia Airport. During the airport's construction, New York Mayor Fiorello LaGuardia had an upper-level lounge set aside for press conferences and business meetings. At one such press conference, he noted that the entire terminal was being offered for lease to airline tenants; after a reporter asked whether the lounge would be leased as well, LaGuardia replied that it would, and a vice president of AA immediately offered to lease the premises. The airline then procured a liquor license and began operating the lounge as the "Admirals Club" in 1939. The second Admirals Club opened at Washington National Airport. Because it was illegal to sell alcohol in Virginia at the time, the club contained refrigerators for the use of its members, so they could store their liquor at the airport. For many years, membership in the Admirals Club (and most other airline lounges) was by the airline's invitation. After a passenger sued for discrimination, the club switched to a paid membership program in 1974. Flagship Lounge. 
Though affiliated with the Admirals Club and staffed by many of the same employees, the Flagship Lounge is a separate lounge specifically designed for customers flying in first class and business class on international flights and transcontinental domestic flights, as well as for AAdvantage Concierge Key, Executive Platinum, Platinum Pro, and Platinum members and Oneworld Emerald and Sapphire frequent flyers. As of June 2023, Flagship Lounges are located at four airports: Chicago-O'Hare, Miami International, Los Angeles, and Dallas/Fort Worth. Flagship Lounges are planned for London-Heathrow and Philadelphia. Corporate affairs. Ownership and structure. American Airlines, Inc., is publicly traded through its parent company, American Airlines Group Inc., under the NASDAQ ticker symbol AAL, with a market capitalization of about $12 billion as of 2019, and is included in the S&P 500 index. American Eagle is a network of six regional carriers that operate under a codeshare and service agreement with American, operating flights to destinations in the United States, Canada, the Caribbean, and Mexico. Three of these carriers are independent and three are subsidiaries of American Airlines Group: Envoy Air Inc., Piedmont Airlines, Inc., and PSA Airlines Inc. Headquarters. American Airlines is headquartered across several buildings in Fort Worth, Texas, that it calls the "Robert L. Crandall Campus" in honor of former president and CEO Robert Crandall. The five-building office complex was designed by Pelli Clarke Pelli Architects. The campus is located on 300 acres adjacent to Dallas/Fort Worth International Airport, American's fortress hub. Before it was headquartered in Texas, American Airlines was headquartered at 633 Third Avenue in the Murray Hill area of Midtown Manhattan, New York City. In 1979, American moved its headquarters to a site at Dallas/Fort Worth International Airport, which affected up to 1,300 jobs. Mayor of New York City Ed Koch described the move as a "betrayal" of New York City. American initially moved to two leased office buildings in Grand Prairie, Texas. On January 17, 1983, the airline finished moving into a $150 million facility in Fort Worth; $147 million in Dallas/Fort Worth International Airport bonds financed the headquarters. The airline began leasing the facility from the airport, which owns it. Following the merger of US Airways and American Airlines, the new company consolidated its corporate headquarters in Fort Worth, abandoning the US Airways headquarters in Phoenix, Arizona. As of 2015, American Airlines is the corporation with the largest presence in Fort Worth. In 2015, American announced that it would build a new headquarters in Fort Worth. Groundbreaking began in the spring of 2016, and occupancy was completed in September 2019. The airline plans to house 5,000 new workers in the building. It is located on a property adjacent to the airline's flight academy and conference and training center, west of Texas State Highway 360 and west of the current headquarters. The airline will lease the land from Dallas–Fort Worth International Airport, and this area will include the headquarters. Construction of the new headquarters began after the demolition of the Sabre facility previously on the site. The airline considered developing a new headquarters in Irving, Texas, on the old Texas Stadium site, before deciding to keep the headquarters in Fort Worth. Corporate identity. Logo. 
In 1931, Goodrich Murphy, an American employee, designed the AA logo as an entry in a logo contest. The eagle in the logo was copied from a Scottish hotel brochure. The logo was redesigned by Massimo Vignelli in 1967. Thirty years later, in 1997, American Airlines was able to make its logo Internet-compatible by buying the domain AA.com. "AA" is also American's two-letter IATA airline designator. On January 17, 2013, American launched a new rebranding and marketing campaign with FutureBrand dubbed, "A New American". This included a new logo, which includes elements of the 1967 logo. American Airlines faced difficulty obtaining copyright registration for their 2013 logo. On June 3, 2016, American Airlines sought to register it with the United States Copyright Office, but in October of that year, the Copyright Office ruled that the logo was ineligible for copyright protection, as it did not pass the threshold of originality, and was thus in the public domain. American requested that the Copyright Office reconsider, but on January 8, 2018, the Copyright Office affirmed its initial determination. After American Airlines submitted additional materials, the Copyright Office reversed its decision on December 7, 2018, and ruled that the logo contained enough creativity to merit copyright protection. Aircraft livery. American's early liveries varied widely, but a common livery was adopted in the 1930s, featuring an eagle painted on the fuselage. The eagle became a symbol of the company and inspired the name of American Eagle Airlines. Propeller aircraft featured an international orange lightning bolt running down the length of the fuselage, which was replaced by a simpler orange stripe with the introduction of jets. In the late 1960s, American commissioned designer Massimo Vignelli to develop a new livery. The original design called for a red, white, and blue stripe on the fuselage, and a simple "AA" logo, without an eagle, on the tail; instead, Vignelli created a highly stylized eagle, which remained the company's logo until January 16, 2013. On January 17, 2013, American unveiled a new livery. Before then, American had been the only major U.S. airline to leave most of its aircraft surfaces unpainted. This was because C. R. Smith would not say he liked painted aircraft and refused to use any liveries that involved painting the entire plane. Robert "Bob" Crandall later justified the distinctive natural metal finish by noting that less paint reduced the aircraft's weight, thus saving on fuel costs. In January 2013, American launched a new rebranding and marketing campaign dubbed, "The New American". In addition to a new logo, American Airlines introduced a new livery for its fleet. The airline calls the new livery and branding "a clean and modern update". The current design features an abstract American flag on the tail, along with a silver-painted fuselage, as a throw-back to the old livery. The new design was painted by Leading Edge Aviation Services in California. Doug Parker, the incoming CEO indicated that the new livery could be short-lived, stating that "maybe we need to do something slightly different than that ... The only reason this is an issue now is that they just did it right in the middle, which kind of makes it confusing, so that gives us an opportunity, actually, to decide if we are going to do something different because we have so many airplanes to paint". 
The current logo and livery have received mixed criticism, with "Design Shack" editor Joshua Johnson writing that they "boldly and proudly communicate the concepts of American pride and freedom wrapped into a shape that instantly makes you think about an airplane", and "AskThePilot.com" author Patrick Smith describing the logo as "a linoleum knife poking through a shower curtain". Later in January 2013, Bloomberg asked Massimo Vignelli, the designer of the 1967 American Airlines logo, for his opinion on the rebranding. In the end, American let its employees decide the new livery's fate. On an internal website for employees, American posted two options: one with the new livery and one with a modified version of the old livery. All American Airlines Group employees (including those of US Airways and other affiliates) were able to vote. American ultimately decided to keep the new look. Parker announced that American would keep a US Airways and an America West heritage aircraft in the fleet, with plans to add a heritage TWA aircraft and a heritage American plane with the old livery. As of September 2019, American has heritage aircraft for Piedmont, PSA, America West, US Airways, Reno Air, TWA, and AirCal in its fleet. It also has two AA-branded heritage 737-800 aircraft: N905NN in the AstroJet livery and N921NN in the polished-aluminum livery used from 1967 to 2013. Customer service. American, both before and after the merger with US Airways, has consistently performed poorly in rankings. The Wall Street Journal's annual airline rankings have ranked American as the worst or second-worst U.S. carrier for ten of the past twelve years, and in the bottom three of U.S. airlines for at least the past twelve years. The airline has persistently performed poorly in the areas of losing checked luggage and bumping passengers due to oversold flights. Worker relations. The main representatives of key groups of employees are: Concerns and conflicts. Environmental violations. Between October 1993 and July 1998, American Airlines was repeatedly cited for using high-sulfur fuel in motor vehicles at 10 major airports around the country, a violation of the Clean Air Act. Lifetime AAirpass. Since 1981, as a means of creating revenue in a period of loss-making, American Airlines had offered a lifetime pass of unlimited travel for an initial cost of $250,000. This entitled the pass holder to fly anywhere in the world. Twenty-eight were sold. However, after some time, the airline realized it was losing money on the passes, with some ticketholders costing it up to $1 million each. Ticketholders were booking large numbers of flights, with some flying interstate for lunch or flying to London multiple times a month. AA raised the cost of the lifetime pass to $3 million, and then finally stopped offering it in 2003. AA then used litigation to cancel two of the lifetime passes, saying they "had been terminated due to fraudulent activity". Discrimination complaints. On October 24, 2017, the NAACP issued a travel advisory for American Airlines urging African Americans to "exercise caution" when traveling with the airline. The NAACP issued the advisory after four incidents. In one incident, a black woman was moved from first class to coach while her white traveling companion was allowed to remain in first class. In another incident, a black man was forced to give up his seats after being confronted by two unruly white passengers. 
According to the NAACP, while it did receive complaints about other airlines, most of the complaints it received in the year before the advisory concerned American Airlines. In July 2018, the NAACP lifted its travel advisory, saying that American had made improvements to mitigate discrimination and unsafe treatment of African Americans. Accidents and incidents. As of March 2019, the airline has had almost sixty aircraft hull losses, beginning with the crash of an American Airways Ford 5-AT-C Trimotor in August 1931. Of these, most were propeller-driven aircraft, including three Lockheed L-188 Electra turboprop aircraft (of which one, the crash in 1959 of Flight 320, resulted in fatalities). The two accidents with the highest fatalities in both the airline's and U.S. aviation history were Flight 191 in 1979 and Flight 587 in 2001. Out of the 17 hijackings of American Airlines flights, two aircraft were hijacked and destroyed in the September 11 attacks: Flight 11 crashed into the north facade of the North Tower of the World Trade Center, and Flight 77 crashed into the Pentagon; both were bound for LAX from Boston Logan International Airport and Washington Dulles International Airport, respectively. Other accidents include the Flight 383 engine failure and fire in 2016. There were two training flight accidents in which the crew were killed and six that resulted in no fatalities. Another four jet aircraft have been written off due to incidents while they were parked between flights or while undergoing maintenance. Carbon footprint. American Airlines reported total CO2e emissions (direct and indirect) for the twelve months ending December 31, 2020, at 20,092 kt, a decrease of 21,347 kt (51.5%) from the previous year. The company aims to achieve net zero carbon emissions by 2050.
2388
Antidepressant
Antidepressants are a class of medications used to treat major depressive disorder, anxiety disorders, chronic pain, and addiction. Common side effects of antidepressants include dry mouth, weight gain, dizziness, headaches, sexual dysfunction, and emotional blunting. There is an increased risk of suicidal thinking and behavior when they are taken by children, adolescents, and young adults. A discontinuation syndrome, which resembles recurrent depression and, in the case of the SSRI class, may be accompanied by post-SSRI sexual dysfunction (PSSD), can occur after stopping any antidepressant; its effects may be permanent and irreversible. Research regarding the effectiveness of antidepressants for depression in adults is controversial and has found both benefits and drawbacks. Meanwhile, evidence of benefit in children and adolescents is contested and inconclusive, even though antidepressant use in these groups has increased considerably since the 2000s due to increased prescribing by psychiatrists. While a 2018 study found that the 21 most commonly prescribed antidepressant medications were slightly more effective than placebos for the short-term (acute) treatment of adults with major depressive disorder, other research has found that the placebo effect may account for most or all of the drugs' observed efficacy. In addition, some researchers conclude that antidepressants ultimately do more harm than good, contending that they cause permanent neuronal damage and apoptosis and disrupt numerous adaptive processes regulated by serotonin. Research on the effectiveness of antidepressants is generally done on people who have severe symptoms, a population that exhibits much weaker placebo responses, meaning that the results may not be extrapolated to the general population that has not (or has not yet) been diagnosed with anxiety or depression. Medical uses. Antidepressants are prescribed to treat major depressive disorder (MDD), anxiety disorders, chronic pain, and some addictions. Antidepressants are often used in combination with one another. Despite its longstanding prominence in pharmaceutical advertising, the myth that low serotonin levels cause depression is not supported by scientific evidence. Proponents of the monoamine hypothesis of depression recommend choosing an antidepressant that targets the most prominent symptoms. Under this practice, for example, a person with MDD who is also anxious or irritable would be treated with selective serotonin reuptake inhibitors (SSRIs) or norepinephrine reuptake inhibitors, while a person suffering from loss of energy and enjoyment of life would take a norepinephrine–dopamine reuptake inhibitor. Major depressive disorder. The 2022 guidelines of the UK National Institute for Health and Care Excellence (NICE) indicate that antidepressants should not be routinely used for the initial treatment of mild depression, "unless that is the person's preference". The guidelines recommend that antidepressant treatment be considered: The guidelines further note that in most cases, antidepressants should be used in combination with psychosocial interventions and should be continued for at least six months to reduce the risk of relapse, and that SSRIs are typically better tolerated than other antidepressants. 
American Psychiatric Association (APA) treatment guidelines recommend that initial treatment be individually tailored based on factors including the severity of symptoms, co-existing disorders, prior treatment experience, and the person's preference. Options may include antidepressants, psychotherapy, electroconvulsive therapy (ECT), transcranial magnetic stimulation (TMS), or light therapy. The APA recommends antidepressant medication as an initial treatment choice in people with mild, moderate, or severe major depression, and that it should be given to all people with severe depression unless ECT is planned. Reviews of antidepressants generally find that they benefit adults with depression. On the other hand, some contend that most studies on antidepressant medication are confounded by several biases: the lack of an active placebo, which means that many people in the placebo arm of a double-blind study may deduce that they are not getting any true treatment, thus destroying double-blindness; a short follow-up after termination of treatment; non-systematic recording of adverse effects; very strict exclusion criteria in samples of patients; studies being paid for by the industry; and selective publication of results. This means that the small beneficial effects that are found may not be statistically significant. Among the 21 most commonly prescribed antidepressants, the most effective and well-tolerated are escitalopram, paroxetine, sertraline, agomelatine, and mirtazapine. For children and adolescents with moderate to severe depressive disorder, some evidence suggests fluoxetine (either with or without cognitive behavioral therapy) is the best treatment, but more research is needed to be certain. Sertraline, escitalopram, and duloxetine may also help reduce symptoms. A 2023 systematic review and meta-analysis of randomized controlled trials of antidepressants for major depressive disorder found that the medications provided only small to doubtful benefits in terms of quality of life. Likewise, a 2022 systematic review and meta-analysis of randomized controlled trials of antidepressants for major depressive disorder in children and adolescents found small though statistically significant improvements in quality of life. Anxiety disorders. For children and adolescents, fluvoxamine is effective in treating a range of anxiety disorders. Fluoxetine, sertraline, and paroxetine can also help with managing various forms of anxiety in children and adolescents. Generalized anxiety disorder. Antidepressants are recommended by the National Institute for Health and Care Excellence (NICE) for the treatment of generalized anxiety disorder (GAD) that has failed to respond to conservative measures such as education and self-help activities. GAD is a common disorder in which the central feature is excessive worry about numerous events. Key symptoms include excessive anxiety about surrounding events and issues, and difficulty controlling worrisome thoughts, persisting for at least six months. Antidepressants provide a modest to moderate reduction in anxiety in GAD. The efficacy of different antidepressants is similar. Social anxiety disorder. Some antidepressants are used as a treatment for social anxiety disorder, but their efficacy is not entirely convincing, as only a small proportion of antidepressants have shown some effectiveness for this condition. Paroxetine was the first drug to be FDA-approved for this disorder. 
It is considered effective, although not everyone responds favorably to the drug. Sertraline and fluvoxamine extended-release were later approved for it as well, while escitalopram is used off-label with acceptable efficacy. However, there is not enough evidence to support citalopram for treating social anxiety disorder, and fluoxetine was no better than a placebo in clinical trials. SSRIs are used as a first-line treatment for social anxiety, but they do not work for everyone. One alternative would be venlafaxine, an SNRI, which has shown benefits for social phobia in five clinical trials against a placebo, while the other SNRIs are not considered particularly useful for this disorder as many of them did not undergo testing for it. It remains unclear whether duloxetine and desvenlafaxine can provide benefits for people with social anxiety. Another class of antidepressants, the MAOIs, is considered effective for social anxiety, but these drugs come with many unwanted side effects and are rarely used. Phenelzine was shown to be a good treatment option, but its use is limited by dietary restrictions. Moclobemide is a RIMA and showed mixed results, but still received approval in some European countries for social anxiety disorder. TCA antidepressants, such as clomipramine and imipramine, are not considered effective for this anxiety disorder in particular. This leaves SSRIs such as paroxetine, sertraline, and fluvoxamine CR as acceptable and well-tolerated treatment options for this disorder. Obsessive–compulsive disorder. SSRIs are a second-line treatment for adult obsessive–compulsive disorder (OCD) with mild functional impairment, and a first-line treatment for those with moderate or severe impairment. In children, SSRIs are considered a second-line therapy in those with moderate-to-severe impairment, with close monitoring for psychiatric adverse effects. Sertraline and fluoxetine are effective in treating OCD in children and adolescents. Clomipramine, a TCA drug, is considered effective and useful for OCD. However, it is used as a second-line treatment because it is less well-tolerated than SSRIs. Despite this, it has not shown superiority to fluvoxamine in trials. All SSRIs can be used effectively for OCD. SNRI use may also be attempted, though no SNRIs have been approved for the treatment of OCD. Despite these treatment options, many patients remain symptomatic after initiating the medication, and fewer than half achieve remission. Post–traumatic stress disorder. Antidepressants are one of the treatment options for PTSD. However, their efficacy is not well established. Paroxetine and sertraline have been FDA-approved for the treatment of PTSD. Paroxetine has slightly higher response and remission rates than sertraline for this condition. However, neither drug is considered very helpful for a broad patient demographic. Fluoxetine and venlafaxine are used off-label. Fluoxetine has produced unsatisfactory mixed results. Venlafaxine showed response rates of 78%, which is significantly higher than what paroxetine and sertraline achieved. However, it did not address as many symptoms of PTSD as paroxetine and sertraline, in part because venlafaxine is an SNRI. This class of drugs inhibits the reuptake of norepinephrine, which may cause anxiety in some patients. Fluvoxamine, escitalopram, and citalopram have not been well tested for this disorder. MAOIs, while some of them may be helpful, are not used much because of their unwanted side effects. 
This leaves paroxetine and sertraline as acceptable treatment options for some people, although more effective antidepressants are needed. Panic disorder. Panic disorder is treated relatively well with medications compared to other disorders. Several classes of antidepressants have shown efficacy for this disorder, with SSRIs and SNRIs used first-line. Paroxetine, sertraline, and fluoxetine are FDA-approved for panic disorder, while fluvoxamine, escitalopram, and citalopram are also considered effective for it. The SNRI venlafaxine is also approved for this condition. Unlike in social anxiety disorder and PTSD, some tricyclic antidepressants, such as clomipramine and imipramine, have shown efficacy for panic disorder. Moreover, the MAOI phenelzine is also considered useful. Many drugs are thus available for the treatment of panic disorder. However, the starting dose must be lower than the one used for major depressive disorder, because people have reported an increase in anxiety when starting the medication. Overall, while the treatment options for panic disorder are acceptable and useful, many people still have residual symptoms after treatment. Eating disorders. Antidepressants are recommended as an alternative or additional first step to self-help programs in the treatment of bulimia nervosa. SSRIs (fluoxetine in particular) are preferred over other antidepressants due to their acceptability, tolerability, and superior reduction of symptoms in short-term trials. Long-term efficacy remains poorly characterized. Bupropion is not recommended for the treatment of eating disorders due to an increased risk of seizure. Similar recommendations apply to binge eating disorder. SSRIs provide short-term reductions in binge eating behavior, but have not been associated with significant weight loss. Clinical trials have generated mostly negative results for the use of SSRIs in the treatment of anorexia nervosa. Treatment guidelines from the National Institute for Health and Care Excellence (NICE) recommend against the use of SSRIs in this disorder. Those from the American Psychiatric Association (APA) note that SSRIs confer no advantage regarding weight gain, but may be used for the treatment of co-existing depressive, anxiety, or obsessive–compulsive disorders. Pain. Fibromyalgia. A 2012 meta-analysis concluded that antidepressant treatment favorably affects pain, health-related quality of life, depression, and sleep in fibromyalgia syndrome. Tricyclics appear to be the most effective class, with moderate effects on pain and sleep, and small effects on fatigue and health-related quality of life. The fraction of people experiencing a 30% pain reduction on tricyclics was 48%, versus 28% on placebo. For SSRIs and SNRIs, the fractions of people experiencing a 30% pain reduction were 36% (20% in the placebo comparator arms) and 42% (32% in the corresponding placebo comparator arms), respectively. Discontinuation of treatment due to side effects was common. Antidepressants including amitriptyline, fluoxetine, duloxetine, milnacipran, moclobemide, and pirlindole are recommended by the European League Against Rheumatism (EULAR) for the treatment of fibromyalgia based on "limited evidence". Neuropathic pain. A 2014 meta-analysis from the Cochrane Collaboration found the antidepressant duloxetine to be effective for the treatment of pain resulting from diabetic neuropathy. 
The same group reviewed data for amitriptyline in the treatment of neuropathic pain and found limited useful randomized clinical trial data. They concluded that the long history of successful use in the community for the treatment of fibromyalgia and neuropathic pain justified its continued use. The group was concerned about the potential overestimation of the amount of pain relief provided by amitriptyline, and highlighted that only a small number of people will experience significant pain relief by taking this medication. Other uses. Antidepressants may be modestly helpful for treating people who have both depression and alcohol dependence; however, the evidence supporting this association is of low quality. Bupropion is used to help people stop smoking. Antidepressants are also used to control some symptoms of narcolepsy. Antidepressants may be used to relieve pain in people with active rheumatoid arthritis; however, further research is required. Antidepressants have been shown to be superior to placebo in treating depression in individuals with physical illness, although reporting bias may have exaggerated this finding. Limitations and strategies. Among individuals treated with a given antidepressant, between 30% and 50% do not show a response. Approximately one-third of people achieve a full remission, one-third experience a response, and one-third are non-responders. Partial remission is characterized by the presence of poorly defined residual symptoms. These symptoms typically include depressed mood, anxiety, sleep disturbance, fatigue, and diminished interest or pleasure. It is currently unclear which factors predict partial remission. However, it is clear that residual symptoms are powerful predictors of relapse, with relapse rates three to six times higher in people with residual symptoms than in those who experience full remission. In addition, antidepressant drugs tend to lose efficacy over the course of long-term maintenance therapy. According to data from the Centers for Disease Control and Prevention, less than one-third of Americans taking one antidepressant medication have seen a mental health professional in the previous year. Several strategies are used in clinical practice to try to overcome these limits and variations. They include switching medication, augmentation, and combination. There is controversy amongst researchers regarding the efficacy and risk-benefit ratio of antidepressants. Although antidepressants consistently outperform placebo in meta-analyses, the difference is modest and it is not clear that their statistical superiority translates into clinical efficacy. The aggregate effect of antidepressants typically results in changes below the threshold of clinical significance on depression rating scales. Proponents of antidepressants counter that the most common scale, the HDRS, is not suitable for assessing drug action, that the threshold for clinical significance is arbitrary, and that antidepressants consistently result in significantly raised scores on the mood item of the scale. Assessments of antidepressants using alternative, more sensitive scales, such as the MADRS, do not yield results markedly different from the HDRS and likewise find only a marginal clinical benefit. Another hypothesis proposed to explain the poor performance of antidepressants in clinical trials is high treatment response heterogeneity. 
A subset of patients who differ strongly in their response to antidepressants could influence the average response, while the heterogeneity itself could be obscured by the averaging. Studies have not supported this hypothesis, but it is very difficult to measure treatment effect heterogeneity. Poor and complex clinical trial design might also account for the small effects seen for antidepressants. The randomized controlled trials used to approve drugs are short, and may not capture the full effect of antidepressants. Additionally, the placebo effect might be inflated in these trials by frequent clinical consultation, lowering the comparative performance of antidepressants. Critics agree that current clinical trials are poorly designed, which limits knowledge about antidepressants. More naturalistic studies, such as STAR*D, have produced results suggesting that antidepressants may be less effective in clinical practice than in randomized controlled trials. Critics of antidepressants maintain that the superiority of antidepressants over placebo is the result of systemic flaws in clinical trials and the research literature. Trials conducted with industry involvement tend to produce more favorable results, and accordingly many of the trials included in meta-analyses are at high risk of bias. Additionally, meta-analyses co-authored by industry employees find more favorable results for antidepressants. The results of antidepressant trials are significantly more likely to be published if they are favorable, and unfavorable results are very often left unpublished or misreported, a phenomenon called publication bias or selective publication. Although this issue has diminished with time, it remains an obstacle to accurately assessing the efficacy of antidepressants. Misreporting of clinical trial outcomes and of serious adverse events, such as suicide, is common. Ghostwriting of antidepressant trials is widespread, a practice in which prominent researchers, or so-called key opinion leaders, attach their names to studies actually written by pharmaceutical company employees or consultants. A particular concern is that the psychoactive effects of antidepressants may lead to the unblinding of participants or researchers, enhancing the placebo effect and biasing results. Some have therefore maintained that antidepressants may only be active placebos. When these and other flaws in the research literature are not taken into account, meta-analyses may find inflated results on the basis of poor evidence. Critics contend that antidepressants have not been proven sufficiently effective by RCTs or in clinical practice and that the widespread use of antidepressants is not evidence-based. They also note that adverse effects, including withdrawal difficulties, are likely underreported, skewing clinicians' ability to make risk-benefit judgements. Accordingly, they believe antidepressants are overused, particularly for non-severe depression and conditions in which they are not indicated. Critics charge that the widespread use and public acceptance of antidepressants is the result of pharmaceutical advertising, research manipulation, and misinformation. Current mainstream psychiatric opinion recognizes the limitations of antidepressants but recommends their use in adults with more severe depression as a first-line treatment. Switching antidepressants. 
The American Psychiatric Association's 2000 Practice Guideline advises that, where no response is achieved within six to eight weeks of treatment with an antidepressant, clinicians should first switch to an antidepressant in the same class, and then to an antidepressant of a different class. A 2006 meta-analysis found wide variation in the findings of prior studies: for people who had failed to respond to an SSRI antidepressant, between 12% and 86% showed a response to a new drug. The more antidepressants an individual had previously tried, however, the less likely they were to benefit from a new antidepressant trial. In contrast, a later meta-analysis found no difference between switching to a new drug and staying on the old medication: although 34% of treatment-resistant people responded when switched to the new drug, 40% responded without being switched. Augmentation and combination. For a partial response, the American Psychiatric Association (APA) guidelines suggest augmentation, or adding a drug from a different class. These include lithium and thyroid augmentation, dopamine agonists, sex steroids, NRIs, glucocorticoid-specific agents, and the newer anticonvulsants. A combination strategy involves adding another antidepressant, usually from a different class, to affect other mechanisms. Although this may be used in clinical practice, there is little evidence for the relative efficacy or adverse effects of this strategy. Other approaches that have been tested include the use of psychostimulants as an augmentation therapy. Several studies have shown the efficacy of adding modafinil in people with treatment-resistant depression, and it has been used to help combat SSRI-associated fatigue. Long-term use and stopping. The effects of antidepressants typically do not continue once the course of medication ends. This results in a high rate of relapse. In 2003, a meta-analysis found that 18% of people who had responded to an antidepressant relapsed while still taking it, compared to 41% whose antidepressant was switched for a placebo. A gradual loss of therapeutic benefit occurs in a minority of people during the course of treatment. A strategy involving the use of pharmacotherapy in the treatment of the acute episode, followed by psychotherapy in its residual phase, has been suggested by some studies. For patients who wish to stop their antidepressants, engaging in brief psychological interventions such as Preventive Cognitive Therapy or mindfulness-based cognitive therapy while tapering down has been found to diminish the risk of relapse. Adverse effects. Antidepressants can cause various adverse effects, depending on the individual and the drug in question. Almost any medication involved with serotonin regulation has the potential to cause serotonin toxicity (also known as "serotonin syndrome"), an excess of serotonin that can induce mania, restlessness, agitation, emotional lability, insomnia, and confusion as its primary symptoms. Although the condition is serious, it is not particularly common, generally only appearing at high doses or while on other medications. Assuming proper medical intervention is taken within about 24 hours, it is rarely fatal. Antidepressants appear to increase the risk of diabetes by about 1.3-fold. MAOIs tend to have pronounced (sometimes fatal) interactions with a wide variety of medications and over-the-counter drugs. If taken with foods that contain very high levels of tyramine (e.g., mature cheese, cured meats, or yeast extracts), they may cause a potentially lethal hypertensive crisis. 
At lower doses, the person may only experience a headache due to an increase in blood pressure. In response to these adverse effects, a different type of MAOI, the class of reversible inhibitors of monoamine oxidase A (RIMAs), has been developed. The primary advantage of RIMAs is that they do not require the person to follow a special diet, while purportedly being as effective as SSRIs and tricyclics in treating depressive disorders. Tricyclics and SSRIs can cause drug-induced QT prolongation, especially in older adults; this condition can degenerate into a specific type of abnormal heart rhythm called torsades de pointes, which can potentially lead to sudden cardiac arrest. Some antidepressants are also believed to increase suicidal ideation. Pregnancy. SSRI use in pregnancy has been associated with a variety of risks with varying degrees of proof of causation. As depression is independently associated with negative pregnancy outcomes, determining the extent to which observed associations between antidepressant use and specific adverse outcomes reflect a causative relationship has been difficult in some cases. In other cases, the attribution of adverse outcomes to antidepressant exposure seems fairly clear. SSRI use in pregnancy is associated with an increased risk of spontaneous abortion of about 1.7-fold, and is associated with preterm birth and low birth weight. A systematic review of the risk of major birth defects in antidepressant-exposed pregnancies found a small increase (3% to 24%) in the risk of major malformations and a risk of cardiovascular birth defects that did not differ from that in non-exposed pregnancies. A study of fluoxetine-exposed pregnancies found a 12% increase in the risk of major malformations that did not reach statistical significance. Other studies have found an increased risk of cardiovascular birth defects among depressed mothers not undergoing SSRI treatment, suggesting the possibility of ascertainment bias, e.g. that worried mothers may pursue more aggressive testing of their infants. Another study found no increase in cardiovascular birth defects and a 27% increased risk of major malformations in SSRI-exposed pregnancies. The FDA warns of a risk of birth defects with the use of paroxetine, and advises that MAOIs should be avoided in pregnancy. A 2013 systematic review and meta-analysis found that antidepressant use during pregnancy was statistically significantly associated with some pregnancy outcomes, such as gestational age and preterm birth, but not with other outcomes. The same review cautioned that because differences between the exposed and unexposed groups were small, it was doubtful whether they were clinically significant. A neonate (an infant less than 28 days old) may experience a withdrawal syndrome from abrupt discontinuation of the antidepressant at birth. Antidepressants can be present in varying amounts in breast milk, but their effects on infants are currently unknown. Moreover, SSRIs inhibit nitric oxide synthesis, which plays an important role in setting vascular tone. Several studies have pointed to an increased risk of prematurity associated with SSRI use, and this association may be due to an increased risk of pre-eclampsia during pregnancy. Antidepressant-induced mania. Another possible problem with antidepressants is the chance of antidepressant-induced mania or hypomania in people with or without a diagnosis of bipolar disorder. Many cases of bipolar depression are very similar to those of unipolar depression. 
Therefore, the person can be misdiagnosed with unipolar depression and be given antidepressants. Studies have shown that antidepressant-induced mania can occur in 20–40% of people with bipolar disorder. For bipolar depression, antidepressants (most frequently SSRIs) can exacerbate or trigger symptoms of hypomania and mania. Suicide. Studies have shown that the use of antidepressants is correlated with an increased risk of suicidal behavior and thinking (suicidality) in those aged under 25 years old. This problem has been serious enough to warrant government intervention by the US Food and Drug Administration (FDA) to warn of the increased risk of suicidality during antidepressant treatment. According to the FDA, the heightened risk of suicidality occurs within the first one to two months of treatment. The National Institute for Health and Care Excellence (NICE) places the excess risk in the "early stages of treatment". A meta-analysis suggests that the relationship between antidepressant use and suicidal behavior or thoughts is age-dependent. Compared with placebo, the use of antidepressants is associated with an increase in suicidal behavior or thoughts among those 25 years old or younger (OR=1.62). A review of RCTs and epidemiological studies by Healy and Whitaker found an increase in suicidal acts by a factor of 2.4. There is no effect or possibly a mild protective effect among those aged 25 to 64 (OR=0.79). Antidepressant treatment has a protective effect against suicidality among those aged 65 and over (OR=0.37). Sexual dysfunction. Sexual side effects are also common with SSRIs, such as loss of sexual drive, failure to reach orgasm, and erectile dysfunction. Although usually reversible, these sexual side-effects can, in rare cases, continue after the drug has been completely withdrawn. In a study of 1,022 outpatients, overall sexual dysfunction with all antidepressants averaged 59.1% with SSRI values between 57% and 73%, mirtazapine 24%, nefazodone 8%, amineptine 7%, and moclobemide 4%. Moclobemide, a selective reversible MAO-A inhibitor, does not cause sexual dysfunction and can lead to an improvement in all aspects of sexual function. Biochemical mechanisms suggested as causative include increased serotonin, particularly affecting 5-HT2 and 5-HT3 receptors; decreased dopamine; decreased norepinephrine; blockade of cholinergic and α1adrenergic receptors; inhibition of nitric oxide synthetase; and elevation of prolactin levels. Mirtazapine is reported to have fewer sexual side effects, most likely because it antagonizes 5-HT2 and 5-HT3 receptors and may, in some cases, reverse sexual dysfunction induced by SSRIs by the same mechanism. Bupropion, a weak NDRI and nicotinic antagonist, may be useful in treating reduced libido as a result of SSRI treatment. Emotional blunting. Certain antidepressants may cause emotional blunting, characterized by a reduced intensity of both positive and negative emotions as well as symptoms of apathy, indifference, and amotivation. It may be experienced as either beneficial or detrimental depending on the situation. This side effect has been particularly associated with serotonergic antidepressants like SSRIs and SNRIs but may be less with atypical antidepressants like bupropion, agomelatine, and vortioxetine. Higher doses of antidepressants seem to be more likely to produce emotional blunting than lower doses. 
Emotional blunting can be decreased by reducing dosage, discontinuing the medication, or switching to a different antidepressant that may have less propensity for causing this side effect. Changes in weight. Changes in appetite or weight are common among antidepressants but are largely drug-dependent and related to which neurotransmitters they affect. Mirtazapine and paroxetine, for example, may be associated with weight gain and/or increased appetite, while others (such as bupropion and venlafaxine) achieve the opposite effect. The antihistaminic properties of certain TCA- and TeCA-class antidepressants have been shown to contribute to the common side effects of increased appetite and weight gain associated with these classes of medication. Bone loss. A 2021 nationwide cohort study in South Korea observed a link between SSRI use and bone loss, particularly in recent users. The study also stressed the need for further research to better understand these effects. A 2012 review found that SSRIs, along with tricyclic antidepressants, were associated with a significant increase in the risk of osteoporotic fractures, peaking in the months after initiation and moving back towards baseline during the year after treatment was stopped. These effects exhibited a dose–response relationship within SSRIs, which varied between different drugs of that class. A 2018 meta-analysis of 11 small studies found a reduction in lumbar spine bone density in SSRI users, which affected older people the most. Risk of death. A 2017 meta-analysis found that antidepressants were associated with a significantly increased risk of death (+33%) and new cardiovascular complications (+14%) in the general population. Conversely, risks were not greater in people with existing cardiovascular disease. Discontinuation syndrome. Antidepressant discontinuation syndrome, also called antidepressant withdrawal syndrome, is a condition that can occur following the interruption, reduction, or discontinuation of antidepressant medication. The symptoms may include flu-like symptoms, trouble sleeping, nausea, poor balance, sensory changes, and anxiety. The problem usually begins within three days and may last for several months. Rarely, psychosis may occur. A discontinuation syndrome can occur after stopping any antidepressant, including selective serotonin reuptake inhibitors (SSRIs), serotonin–norepinephrine reuptake inhibitors (SNRIs), and tricyclic antidepressants (TCAs). The risk is greater among those who have taken the medication for longer and when the medication in question has a short half-life. The underlying reason for its occurrence is unclear. The diagnosis is based on the symptoms. Methods of prevention include gradually decreasing the dose among those who wish to stop, though it is possible for symptoms to occur with tapering. Treatment may include restarting the medication and slowly decreasing the dose. People may also be switched to the long-acting antidepressant fluoxetine, which can then be gradually decreased. Approximately 20–50% of people who suddenly stop an antidepressant develop an antidepressant discontinuation syndrome. The condition is generally not serious, though about half of people with symptoms describe them as severe. Some restart antidepressants due to the severity of the symptoms. Pharmacology. Antidepressants act via a large number of different mechanisms of action. 
This includes serotonin reuptake inhibition (SSRIs, SNRIs, TCAs, vilazodone, vortioxetine), norepinephrine reuptake inhibition (NRIs, SNRIs, TCAs), dopamine reuptake inhibition (bupropion, amineptine, nomifensine), direct modulation of monoamine receptors (vilazodone, vortioxetine, SARIs, agomelatine, TCAs, TeCAs, antipsychotics), monoamine oxidase inhibition (MAOIs), and NMDA receptor antagonism (ketamine, esketamine, dextromethorphan), among others (e.g., brexanolone, tianeptine). Some antidepressants also have additional actions, like sigma receptor modulation (certain SSRIs, TCAs, dextromethorphan) and antagonism of histamine H1 and muscarinic acetylcholine receptors (TCAs, TeCAs). The earliest and most widely known scientific theory of antidepressant action is the monoamine hypothesis, which can be traced back to the 1950s and 1960s. This theory states that depression is due to an imbalance, most often a deficiency, of the monoamine neurotransmitters, namely serotonin, norepinephrine, and/or dopamine. However, serotonin in particular has been implicated, as in the serotonin hypothesis of depression. The monoamine hypothesis was originally proposed based on observations that reserpine, a drug which depletes the monoamine neurotransmitters, produced depressive effects in people, and that certain hydrazine antituberculosis agents like iproniazid, which prevent the breakdown of monoamine neurotransmitters, produced apparent antidepressant effects. Most currently marketed antidepressants, which are monoaminergic in their actions, are theoretically consistent with the monoamine hypothesis. Despite the widespread nature of the monoamine hypothesis, it has a number of limitations: for one, all monoaminergic antidepressants have a delayed onset of action of at least a week; and secondly, many people with depression do not respond to monoaminergic antidepressants. A number of alternative hypotheses have been proposed, including hypotheses involving glutamate, neurogenesis, epigenetics, cortisol hypersecretion, and inflammation, among others. In 2022, a major systematic umbrella review by Joanna Moncrieff and colleagues showed that the serotonin theory of depression was not supported by evidence from a wide variety of areas. The authors concluded that there is no association between serotonin and depression, and that there is no evidence that strongly supports the theory that depression is caused by low serotonin activity or concentrations. Other literature had described the lack of support for the theory previously. In many of the expert responses to the review, it was stated that the monoamine hypothesis had already long been abandoned by psychiatry. This is in spite of about 90% of the general public in Western countries believing the theory to be true and many in the field of psychiatry continuing to promote the theory up to recent times. In addition to this review, a 2003 literature review and a 2022 systematic review, both of reserpine and mood, found that there is no consistent evidence that reserpine produces depressive effects. Instead, the results were highly mixed, with similar proportions of studies finding that reserpine had no influence on mood, produced depressogenic effects, or had antidepressant effects. In relation to this, the general monoamine hypothesis, as opposed to just the serotonin theory of depression, is likewise not well-supported by evidence. 
The serotonin and monoamine hypotheses of depression have been heavily promoted by the pharmaceutical industry (e.g., in advertisements) and by the psychiatric profession at large despite the lack of evidence in support of them. In the case of the pharmaceutical industry, this can be attributed to obvious financial incentives, with the theory creating a bias against non-pharmacological treatments for depression. An alternative theory for antidepressant action proposed by certain academics such as Irving Kirsch and Joanna Moncrieff is that they work largely or entirely via placebo mechanisms. This is supported by meta-analyses of randomized controlled trials of antidepressants for depression, which consistently show that placebo groups in trials improve about 80 to 90% as much as antidepressant groups on average and that antidepressants are only marginally more effective for depression than placebos. The difference between antidepressants and placebo corresponds to an effect size (SMD) of about 0.3, which in turn equates to about a 2- to 3-point additional improvement on the 0–52-point (HRSD) and 0–60-point (MADRS) depression rating scales used in trials. Differences in effectiveness between different antidepressants are small and not clinically meaningful. The small advantage of antidepressants over placebo is often statistically significant and is the basis for their regulatory approval, but is sufficiently modest that its clinical significance is doubtful. Moreover, the small advantage of antidepressants over placebo may simply be a methodological artifact caused by unblinding due to the psychoactive effects and side effects of antidepressants, in turn resulting in enhanced placebo effects and apparent antidepressant efficacy. Placebos are not purely psychological phenomenon, but have been found to modify the activity of several brain regions and to increase levels of dopamine and endogenous opioids in the reward pathways. It has been argued by Kirsch that although antidepressants may be used efficaciously for depression as active placebos, they are limited by significant pharmacological side effects and risks, and therefore non-pharmacological therapies, such as psychotherapy and lifestyle changes, which can have similar efficacy to antidepressants but do not have their adverse effects, ought to be preferred as treatments in people with depression. The placebo response, or the improvement in scores in the placebo group in clinical trials, is not only due to the placebo effect, but is also due to other phenomena such as spontaneous remission and regression to the mean. Depression tends to have an episodic course, with people eventually recovering even with no medical intervention, and people tend to seek treatment, as well as enroll in clinical trials, when they are feeling their worst. In meta-analyses of trials of depression therapies, Kirsch estimated based on improvement in untreated waiting-list controls that spontaneous remission and regression to the mean only account for about 25% of the improvement in depression scores with antidepressant therapy. However, another academic, Michael P. Hengartner, has argued and presented evidence that spontaneous remission and regression to the mean might actually account for most of the improvement in depression scores with antidepressants, and that the substantial placebo effect observed in clinical trials might largely be a methodological artifact. 
This suggests that antidepressants may be associated with much less genuine treatment benefit, whether due to the placebo effect or to the antidepressant itself, than has been traditionally assumed. Types. Selective serotonin reuptake inhibitors. Selective serotonin reuptake inhibitors (SSRIs) are believed to increase the extracellular level of the neurotransmitter serotonin by limiting its reabsorption into the presynaptic cell, increasing the level of serotonin in the synaptic cleft available to bind to the postsynaptic receptor. They have varying degrees of selectivity for the other monoamine transporters, with pure SSRIs having only weak affinity for the norepinephrine and dopamine transporters. SSRIs are the most widely prescribed antidepressants in many countries. The efficacy of SSRIs in mild or moderate cases of depression has been disputed. Serotonin–norepinephrine reuptake inhibitors. Serotonin–norepinephrine reuptake inhibitors (SNRIs) are potent inhibitors of the reuptake of serotonin and norepinephrine. These neurotransmitters are known to play an important role in mood. SNRIs can be contrasted with the more widely used selective serotonin reuptake inhibitors (SSRIs), which act mostly upon serotonin alone. The human serotonin transporter (SERT) and norepinephrine transporter (NET) are membrane proteins that are responsible for the reuptake of serotonin and norepinephrine. Balanced dual inhibition of monoamine reuptake may offer advantages over other antidepressants drugs by treating a wider range of symptoms. SNRIs are sometimes also used to treat anxiety disorders, obsessive–compulsive disorder (OCD), attention deficit hyperactivity disorder (ADHD), chronic neuropathic pain, and fibromyalgia syndrome (FMS), and for the relief of menopausal symptoms. Serotonin modulators and stimulators. Serotonin modulator and stimulators (SMSs), sometimes referred to more simply as "serotonin modulators", are a type of drug with a multimodal action specific to the serotonin neurotransmitter system. To be precise, SMSs simultaneously modulate one or more serotonin receptors and inhibit the reuptake of serotonin. The term was coined in reference to the mechanism of action of the serotonergic antidepressant Vortioxetine, which acts as a serotonin reuptake inhibitor (SRI), a partial agonist of the 5-HT1A receptor, and antagonist of the 5-HT3 and 5-HT7 receptors. However, it can also technically be applied to Vilazodone, which is an antidepressant as well and acts as an SRI and 5-HT1A receptor partial agonist. An alternative term is serotonin partial agonist/reuptake inhibitor (SPARI), which can be applied only to Vilazodone. Serotonin antagonists and reuptake inhibitors. Serotonin antagonist and reuptake inhibitors (SARIs) while mainly used as antidepressants are also anxiolytics and hypnotics. They act by antagonizing serotonin receptors such as 5-HT2A and inhibiting the reuptake of serotonin, norepinephrine, and/or dopamine. Additionally, most also act as α1-adrenergic receptor antagonists. The majority of the currently marketed SARIs belong to the phenylpiperazine class of compounds. They include Trazodone and Nefazodone. Tricyclic antidepressants. 
The majority of the tricyclic antidepressants (TCAs) act primarily as serotonin–norepinephrine reuptake inhibitors (SNRIs) by blocking the serotonin transporter (SERT) and the norepinephrine transporter (NET), which results in an elevation of the synaptic concentrations of these neurotransmitters, and therefore an enhancement of neurotransmission. Notably, with the sole exception of amineptine, the TCAs have weak affinity for the dopamine transporter (DAT), and therefore have low efficacy as dopamine reuptake inhibitors (DRIs). Although TCAs are sometimes prescribed for depressive disorders, they have been largely replaced in clinical use in most parts of the world by newer antidepressants such as selective serotonin reuptake inhibitors (SSRIs), serotonin–norepinephrine reuptake inhibitors (SNRIs), and norepinephrine reuptake inhibitors (NRIs). Rates of adverse effects have been found to be similar between TCAs and SSRIs. Tetracyclic antidepressants. Tetracyclic antidepressants (TeCAs) are a class of antidepressants that were first introduced in the 1970s. They are named after their chemical structure, which contains four rings of atoms, and are closely related to tricyclic antidepressants (TCAs), which contain three rings of atoms. Monoamine oxidase inhibitors. Monoamine oxidase inhibitors (MAOIs) are chemicals that inhibit the activity of the monoamine oxidase enzyme family. They have a long history of use as medications prescribed for the treatment of depression. They are particularly effective in treating atypical depression. They are also used in the treatment of Parkinson's disease and several other disorders. Because of potentially lethal dietary and drug interactions, MAOIs have historically been reserved as a last line of treatment, used only when other classes of antidepressant drugs (for example selective serotonin reuptake inhibitors and tricyclic antidepressants) have failed. MAOIs have been found to be effective in the treatment of panic disorder with agoraphobia, social phobia, atypical depression or mixed anxiety and depression, bulimia, and post-traumatic stress disorder, as well as borderline personality disorder. MAOIs appear to be particularly effective in the management of bipolar depression, according to a retrospective analysis. There are reports of MAOI efficacy in obsessive–compulsive disorder (OCD), trichotillomania, dysmorphophobia, and avoidant personality disorder, but these findings come from uncontrolled case reports. MAOIs can also be used in the treatment of Parkinson's disease by targeting MAO-B in particular (therefore affecting dopaminergic neurons), as well as providing an alternative for migraine prophylaxis. Inhibition of both MAO-A and MAO-B is used in the treatment of clinical depression and anxiety disorders. NMDA receptor antagonists. NMDA receptor antagonists like ketamine and esketamine are rapid-acting antidepressants and seem to work via blockade of the ionotropic glutamate NMDA receptor. Others. See the list of antidepressants and management of depression for other drugs that are not specifically characterized. Adjuncts. Adjunct medications are an umbrella category of substances that increase the potency of antidepressants or otherwise "enhance" their effects. They may act on targets closely related to those of the antidepressant or through a completely different mechanism of action. This may be attempted when depression treatments have not been successful in the past. 
Adjunct medication techniques generally fall into several broad categories. It is unknown whether undergoing psychological therapy at the same time as taking antidepressants enhances the antidepressive effect of the medication. Less common adjuncts. Lithium has been used to augment antidepressant therapy in those who have failed to respond to antidepressants alone. Furthermore, lithium dramatically decreases the suicide risk in recurrent depression. There is some evidence for the addition of a thyroid hormone, triiodothyronine, in patients with normal thyroid function. Psychopharmacologists have also tried adding a stimulant, in particular, D-amphetamine. However, the use of stimulants in cases of treatment-resistant depression is relatively controversial. A review article published in 2007 found psychostimulants may be effective in treatment-resistant depression with concomitant antidepressant therapy, but a more certain conclusion could not be drawn due to substantial deficiencies in the studies available for consideration, and the somewhat contradictory nature of their results. History. Before the 1950s, opioids and amphetamines were commonly used as antidepressants. Their use was later restricted due to their addictive nature and side effects. Extracts from the herb St John's wort have been used as a "nerve tonic" to alleviate depression. St John's wort fell out of favor in most countries through the 19th and 20th centuries, except in Germany, where Hypericum extracts were eventually licensed, packaged, and prescribed. Small-scale efficacy trials were carried out in the 1970s and 1980s, and attention grew in the 1990s following a meta-analysis. It remains an over-the-counter (OTC) supplement in most countries. Lead contamination associated with its use has raised concern, as lead levels in women in the United States who take St. John's wort are elevated by about 20% on average. Research continues to investigate its active component hyperforin, and to further understand its mode of action. Isoniazid, iproniazid, and imipramine. In 1951, Irving Selikoff and Edward H. Robitzek, working out of Sea View Hospital on Staten Island, began clinical trials on two new anti-tuberculosis agents developed by Hoffmann-La Roche: isoniazid and iproniazid. Only patients with a poor prognosis were initially treated. Nevertheless, their condition improved dramatically. Selikoff and Robitzek noted "a subtle general stimulation ... the patients exhibited renewed vigor and indeed this occasionally served to introduce disciplinary problems." The promise of a cure for tuberculosis in the Sea View Hospital trials was excitedly discussed in the mainstream press. In 1952, learning of the stimulating side effects of isoniazid, the Cincinnati psychiatrist Max Lurie tried it on his patients. In the following year, he and Harry Salzer reported that isoniazid improved depression in two-thirds of their patients, and they coined the term "antidepressant" to refer to its action. A similar incident took place in Paris, where Jean Delay, head of psychiatry at Sainte-Anne Hospital, heard of this effect from his pulmonology colleagues at Cochin Hospital. In 1952 (before Lurie and Salzer), Delay, with the resident Jean-François Buisson, reported the positive effect of isoniazid on depressed patients. The mode of antidepressant action of isoniazid is still unclear. It is speculated that its effect is due to the inhibition of diamine oxidase, coupled with a weak inhibition of monoamine oxidase A. 
Selikoff and Robitzek also experimented with another anti-tuberculosis drug, iproniazid; it showed a greater psychostimulant effect, but more pronounced toxicity. Later, Jackson Smith, Gordon Kamman, George E. Crane, and Frank Ayd described the psychiatric applications of iproniazid. Ernst Zeller found iproniazid to be a potent monoamine oxidase inhibitor. Nevertheless, iproniazid remained relatively obscure until Nathan S. Kline, the influential head of research at Rockland State Hospital, began to popularize it in the medical and popular press as a "psychic energizer". Roche put a significant marketing effort behind iproniazid. Its sales grew until it was recalled in 1961, due to reports of lethal hepatotoxicity. The antidepressant effect of a tricyclic, a three-ringed compound, was first discovered in 1957 by Roland Kuhn in a Swiss psychiatric hospital. Antihistamine derivatives were used to treat surgical shock and later as neuroleptics. Although reserpine was shown in 1955 to be more effective than a placebo in alleviating anxious depression, neuroleptics were being developed as sedatives and antipsychotics. Attempting to improve the effectiveness of chlorpromazine, Kuhn, in conjunction with the Geigy Pharmaceutical Company, discovered the compound "G 22355", later renamed imipramine. Imipramine had a beneficial effect on patients with depression who showed mental and motor retardation. In 1955–56, Kuhn described his new compound as a "thymoleptic", "taking hold of the emotions", in contrast with neuroleptics, which were "taking hold of the nerves". These drugs gradually became established, resulting in the patent and manufacture in the US in 1951 by Häfliger and Schindler. Antidepressants became prescription drugs in the 1950s. It was estimated that no more than fifty to one hundred individuals per million had the kind of depression that these new drugs would treat, and pharmaceutical companies were not enthusiastic about marketing for this small market. Sales through the 1960s remained poor compared to the sales of tranquilizers, which were being marketed for different uses. Imipramine remained in common use and numerous successors were introduced. The use of monoamine oxidase inhibitors (MAOIs) increased after the development and introduction of "reversible" forms that affect only the MAO-A subtype, making these drugs safer to use. By the 1960s, it was thought that the mode of action of tricyclics was to inhibit norepinephrine reuptake. However, norepinephrine reuptake became associated with stimulating effects. Later, tricyclics were thought to affect serotonin, as proposed in 1969 by Carlsson and Lindqvist as well as Lapin and Oxenkrug. Second-generation antidepressants. Researchers began a process of rational drug design to isolate antihistamine-derived compounds that would selectively target these monoamine systems. The first such compound to be patented was zimelidine in 1971, while the first released clinically was indalpine. Fluoxetine was approved for commercial use by the US Food and Drug Administration (FDA) in 1988, becoming the first blockbuster SSRI. Fluoxetine was developed at Eli Lilly and Company in the early 1970s by Bryan Molloy, Klaus Schmiegel, David T. Wong, and others. SSRIs became known as "novel antidepressants" along with other newer drugs such as SNRIs and NRIs with various selective effects. Rapid-acting antidepressants. 
Esketamine (brand name Spravato), the first rapid-acting antidepressant to be approved for clinical treatment of depression, was introduced for this indication in March 2019 in the United States. Research. A 2016 randomized controlled trial evaluated the rapid antidepressant effects of the psychedelic ayahuasca in treatment-resistant depression, with a positive outcome. In 2018, the FDA granted Breakthrough Therapy Designation for psilocybin-assisted therapy for treatment-resistant depression, and in 2019, the FDA granted Breakthrough Therapy Designation for psilocybin therapy for major depressive disorder. Publication bias and aged research. A 2018 systematic review published in The Lancet comparing the efficacy of 21 different first- and second-generation antidepressants found that antidepressant drugs tended to perform better and cause fewer adverse events when they were novel or experimental treatments compared to when they were evaluated again years later. Unpublished data were also associated with smaller positive effect sizes. However, the review did not find evidence of bias associated with industry-funded research. Society and culture. Prescription trends. United Kingdom. In the UK, figures reported in 2010 indicated that the number of antidepressants prescribed by the National Health Service (NHS) almost doubled over a decade. Further analysis published in 2014 showed that the number of antidepressants dispensed annually in the community went up by 25 million in the 14 years between 1998 and 2012, rising from 15 million to 40 million. Nearly 50% of this rise occurred in the four years after the 2008 banking crash, during which time the annual increase in prescriptions rose from 6.7% to 8.5%. These sources also suggest that, aside from the recession, other factors that may influence changes in prescribing rates include: improvements in diagnosis, a reduction of the stigma surrounding mental health, broader prescribing trends, GP characteristics, geographical location, and housing status. Another factor that may contribute to increasing consumption of antidepressants is the fact that these medications are now used for other conditions, including social anxiety and post-traumatic stress disorder. Between 2005 and 2017, the number of adolescents (12 to 17 years) in England who were prescribed antidepressants doubled. On the other hand, antidepressant prescriptions for children aged 5–11 in England decreased between 1999 and 2017. From April 2015, prescriptions increased for both age groups (for people aged 0 to 17) and peaked during the first COVID lockdown in March 2020. According to National Institute for Health and Care Excellence (NICE) guidelines, antidepressants for children and adolescents with depression and obsessive-compulsive disorder (OCD) should be prescribed together with therapy and after being assessed by a child and adolescent psychiatrist. However, between 2006 and 2017, only 1 in 4 of the 12–17-year-olds who were prescribed an SSRI by their GP had seen a specialist psychiatrist and 1 in 6 had seen a pediatrician. Half of these prescriptions were for depression and 16% for anxiety, the latter not being a licensed indication for antidepressants in this age group. Among the suggested possible reasons why GPs are not following the guidelines are the difficulties of accessing talking therapies, long waiting lists, and the urgency of treatment. 
According to some researchers, strict adherence to treatment guidelines would limit access to effective medication for young people with mental health problems. United States. In the United States, antidepressants were the most commonly prescribed medication in 2013. Of the estimated 16 million "long term" (over 24 months) users, roughly 70 percent are female. About 16.5% of white people in the United States took antidepressants, compared with 5.6% of black people in the United States. United States: The most commonly prescribed antidepressants in the US retail market in 2010 were: Netherlands: In the Netherlands, paroxetine is the most prescribed antidepressant, followed by amitriptyline, citalopram and venlafaxine. Adherence. Worldwide, 30% to 60% of people did not follow their practitioner's instructions about taking their antidepressants, and in the US, it appeared that around 50% of people did not take their antidepressants as directed by their practitioner. When people fail to take their antidepressants, there is a greater risk that the drug will not help, that symptoms get worse, that they miss work or are less productive at work, and that they may be hospitalized. Social science perspective. Some academics have highlighted the need to examine the use of antidepressants and other medical treatments in cross-cultural terms, because various cultures prescribe and observe different manifestations, symptoms, meanings, and associations of depression and other medical conditions within their populations. These cross-cultural discrepancies, it has been argued, then have implications for the perceived efficacy and use of antidepressants and other strategies in the treatment of depression in these different cultures. In India, antidepressants are largely seen as tools to combat marginality, promising the individual the ability to reintegrate into society through their use—a view and association not observed in the West. Environmental impacts. Because most antidepressants function by inhibiting the reuptake of the neurotransmitters serotonin, dopamine, and norepinephrine, these drugs can interfere with natural neurotransmitter levels in other organisms impacted by indirect exposure. The antidepressants fluoxetine and sertraline have been detected in aquatic organisms residing in effluent-dominated streams. The presence of antidepressants in surface waters and aquatic organisms has caused concern because ecotoxicological effects on aquatic organisms due to fluoxetine exposure have been demonstrated. Coral reef fish have been demonstrated to modulate aggressive behavior through serotonin. Artificially increasing serotonin levels in crustaceans can temporarily reverse social status and turn subordinates into aggressive and territorial dominant males. Exposure to fluoxetine has been demonstrated to increase serotonergic activity in fish, subsequently reducing aggressive behavior. Perinatal exposure to fluoxetine at relevant environmental concentrations has been shown to lead to significant modifications of memory processing in 1-month-old cuttlefish. This impairment may disadvantage cuttlefish and decrease their survival. Somewhat less than 10% of orally administered fluoxetine is excreted from humans unchanged or as glucuronide.
2389
Auger effect
The Auger effect or Auger–Meitner effect is a physical phenomenon in which the filling of an inner-shell vacancy of an atom is accompanied by the emission of an electron from the same atom. When a core electron is removed, leaving a vacancy, an electron from a higher energy level may fall into the vacancy, resulting in a release of energy. Although most often this energy is released in the form of an emitted photon, the energy can also be transferred to another electron, which is ejected from the atom; this second ejected electron is called an Auger electron. Effect. Upon ejection, the kinetic energy of the Auger electron corresponds to the difference between the energy of the initial electronic transition into the vacancy and the ionization energy for the electron shell from which the Auger electron was ejected. These energy levels depend on the type of atom and the chemical environment in which the atom was located. Auger electron spectroscopy involves the emission of Auger electrons by bombarding a sample with either X-rays or energetic electrons and measures the intensity of Auger electrons that result as a function of the Auger electron energy. The resulting spectra can be used to determine the identity of the emitting atoms and some information about their environment. Auger recombination is a similar Auger effect which occurs in semiconductors. An electron and electron hole (electron-hole pair) can recombine, giving up their energy to an electron in the conduction band, increasing its energy. The reverse effect is known as impact ionization. The Auger effect can impact biological molecules such as DNA. Following the K-shell ionization of the component atoms of DNA, Auger electrons are ejected, leading to damage of its sugar-phosphate backbone. Discovery. The Auger emission process was observed and published in 1922 by Lise Meitner, an Austrian-Swedish physicist, as a side effect in her competitive search for the nuclear beta electrons with the British physicist Charles Drummond Ellis. The French physicist Pierre Victor Auger independently discovered it in 1923 upon analysis of a Wilson cloud chamber experiment, and it became the central part of his PhD work. High-energy X-rays were applied to ionize gas particles and observe the resulting photoelectrons. The observation of electron tracks that were independent of the frequency of the incident photon suggested a mechanism for electron ionization caused by an internal conversion of energy from a radiationless transition. Further investigation, and theoretical work using elementary quantum mechanics and transition rate/transition probability calculations, showed that the effect was a radiationless transition rather than an internal conversion effect.
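The energy balance described in the Effect section can be written compactly. The following is a simplified sketch for a K L1 L2,3 Auger transition, neglecting final-state hole–hole relaxation effects and lumping any surface (work-function) correction for solids into the last term; the symbols are binding energies of the shells named in their subscripts.

```latex
% Simplified energy balance for a K L_1 L_{2,3} Auger transition.
% E_K, E_{L_1}, E_{L_{2,3}} are electron binding energies of the emitting atom;
% \phi is a work-function term, relevant only for emission from a solid surface.
% Relaxation (final-state hole-hole interaction) corrections are neglected.
E_{\mathrm{kin}} \;\approx\; E_K - E_{L_1} - E_{L_{2,3}} - \phi
```

Because the binding energies are element-specific, the measured Auger kinetic energies identify the emitting atom, which is the basis of Auger electron spectroscopy mentioned above.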
2391
Akio Morita
Akio Morita was a Japanese entrepreneur and co-founder of Sony along with Masaru Ibuka. Early life. Akio Morita was born in Nagoya. Morita's family had been involved in sake, miso and soy sauce production in the village of Kosugaya (currently a part of Tokoname City) on the western coast of Chita Peninsula in Aichi Prefecture since 1665. He was the oldest of four siblings and his father Kyuzaemon trained him as a child to take over the family business. Akio, however, found his true calling in mathematics and physics, and in 1944 he graduated from Osaka Imperial University with a degree in physics. He was later commissioned as a sub-lieutenant in the Imperial Japanese Navy, and served in World War II. During his service, Morita met his future business partner Masaru Ibuka in the Navy's Wartime Research Committee. Sony. In September 1945, Ibuka founded a radio repair shop in the bombed-out Shirokiya Department Store in Nihonbashi, Tokyo. Morita saw a newspaper article about Ibuka's new venture and, after some correspondence, chose to join him in Tokyo. With funding from Morita's father, they co-founded "Tokyo Tsushin Kogyo Kabushiki Kaisha" (Tokyo Telecommunications Engineering Corporation, the forerunner of Sony Corporation) in 1946 with about 20 employees and initial capital of ¥190,000. In 1949, the company developed magnetic recording tape and, in 1950, sold the first tape recorder in Japan. Ibuka was instrumental in securing the licensing of transistor technology from Bell Labs to Sony in the 1950s, thus making Sony one of the first companies to apply transistor technology to non-military uses. In 1957, the company produced a pocket-sized radio (the first to be fully transistorized), and in 1958, Morita and Ibuka decided to rename their company Sony Corporation (derived from "sonus", Latin for "sound", and "sonny", a then-common American expression). Morita was an advocate for all the products made by Sony. However, since the radio was slightly too big to fit in a shirt pocket, Morita made his employees wear shirts with slightly larger pockets to give the radio a "pocket-sized" appearance. Morita founded Sony Corporation of America (SONAM, currently abbreviated as SCA) in 1960. In the process, he was struck by the mobility of employees between American companies, which was unheard of in Japan at that time. When he returned to Japan, he encouraged experienced, middle-aged employees of other companies to reevaluate their careers and consider joining Sony. The company filled many positions in this manner, and inspired other Japanese companies to do the same. In 1961, Sony Corporation was the first Japanese company to be listed on the New York Stock Exchange, in the form of American depositary receipts (ADRs). In March 1968, Morita set up a joint venture in Japan between Sony and CBS Records, with him as president, to manufacture "software" for Sony's hardware. Morita became president of Sony in 1971, taking over from Ibuka, who had served from 1950 to 1971. In 1975, Sony released the first Betamax home videocassette recorder, a year before the VHS format came out. Ibuka retired in 1976 and Morita was named chairman of the company. In 1979, the Walkman was introduced, making it one of the world's first portable music players, and in 1982, Sony launched the world's first compact disc player, the Sony CDP-101, together with the compact disc (CD) itself, a new data storage format that Sony and Philips had co-developed. In that year, the 3.5-inch floppy disk was introduced by Sony and it soon became the de facto standard. 
In 1984, Sony launched the Discman series, which extended the Walkman brand to portable CD products. Under the vision of Morita, the company aggressively expanded into new businesses. Part of its motivation for doing so was the pursuit of "convergence", linking film, music and digital electronics. Twenty years after setting up a joint venture with CBS Records in Japan, Sony bought CBS Records Group, which consisted of Columbia Records, Epic Records and other CBS labels. In 1989, they acquired Columbia Pictures Entertainment (Columbia Pictures, TriStar Pictures and others). Norio Ohga, who had joined the company in the 1950s after sending Morita a letter denouncing the poor quality of the company's tape recorders, succeeded Morita as chief executive officer in 1989. Morita suffered a cerebral hemorrhage in 1993 while playing tennis and on November 25, 1994, stepped down as Sony chairman to be succeeded by Ohga. Other affiliations. Morita was vice chairman of the Japan Business Federation (Japan Federation of Economic Organizations), and was a member of the Japan-U.S. Economic Relations Group, also known as the "Wise Men's Group". He helped General Motors with its acquisition of an interest in Isuzu Motors in 1972. He was the third Japanese chairman of the Trilateral Commission. His amateur radio call sign was JP1DPJ. Publications. In 1966, Morita wrote a book called "Gakureki Muyō Ron" (学歴無用論, Never Mind School Records), in which he stresses that school records are not important to success or one's business skills. In 1986, Morita wrote an autobiography titled "Made in Japan". He co-authored the 1991 book "The Japan That Can Say No" with politician Shintaro Ishihara, in which they criticized American business practices and encouraged the Japanese to take a more independent role in business and foreign affairs. (Morita, however, had no intention of criticizing American practices at that time.) The book was translated into English and caused controversy in the United States, and Morita later had his chapters removed from the English version and distanced himself from the book. Awards and honours. In 1972, Morita received the Golden Plate Award of the American Academy of Achievement. Morita was awarded the Albert Medal by the United Kingdom's Royal Society of Arts in 1982, the first Japanese to receive the honor. Two years later, he received the prestigious Legion of Honour, and in 1991, was awarded the First Class Order of the Sacred Treasure from the Emperor of Japan. He was elected to the American Philosophical Society in 1992 and the American Academy of Arts and Sciences in 1993. That same year, he was awarded an honorary British knighthood (KBE). Morita received the International Distinguished Entrepreneur Award from the University of Manitoba in 1987. In 1998, he was the only Asian person on "Time" magazine's list of the 20 most influential business people of the 20th century. He was posthumously awarded the Grand Cordon of the Order of the Rising Sun in 1999. In 2003, Anaheim University's Graduate School of Business was renamed the Akio Morita School of Business in his honor. The Morita family's support for the program led to the growth of the Anaheim University Akio Morita School of Business in Tokyo, Japan. Death. Morita, who loved to play golf and tennis and to watch movies on rainy days, suffered a stroke in 1993 during a game of tennis. The stroke weakened him and left him in a wheelchair. On November 25, 1994, he stepped down as Sony chairman. 
On October 3, 1999, Morita died of pneumonia at the age of 78 in a Tokyo hospital, where he had been a patient since August 1999.
2392
Anode
An anode is an electrode of a polarized electrical device through which conventional current enters the device. This contrasts with a cathode, an electrode of the device through which conventional current leaves the device. A common mnemonic is ACID, for "anode current into device". The direction of conventional current (the flow of positive charges) in a circuit is opposite to the direction of electron flow, so (negatively charged) electrons flow out the anode of a galvanic cell, into an outside or external circuit connected to the cell. For example, the end of a household battery marked with a "-" (minus) is the anode. In both a galvanic cell and an electrolytic cell, the anode is the electrode at which the oxidation reaction occurs. In a galvanic cell the anode is the wire or plate having excess negative charge as a result of the oxidation reaction. In an electrolytic cell, the anode is the wire or plate upon which excess positive charge is imposed. As a result of this, anions will tend to move towards the anode where they will undergo oxidation. Historically, the anode of a galvanic cell was also known as the zincode because it was usually composed of zinc. Charge flow. The terms anode and cathode are not defined by the voltage polarity of electrodes but the direction of current through the electrode. An anode is an electrode of a device through which conventional current (positive charge) flows into the device from an external circuit, while a cathode is an electrode through which conventional current flows out of the device. If the current through the electrodes reverses direction, as occurs for example in a rechargeable battery when it is being charged, the roles of the electrodes as anode and cathode are reversed. Conventional current depends not only on the direction the charge carriers move, but also the carriers' electric charge. The currents outside the device are usually carried by electrons in a metal conductor. Since electrons have a negative charge, the direction of electron flow is opposite to the direction of conventional current. Consequently, electrons leave the device through the anode and enter the device through the cathode. The definition of anode and cathode is different for electrical devices such as diodes and vacuum tubes where the electrode naming is fixed and does not depend on the actual charge flow (current). These devices usually allow substantial current flow in one direction but negligible current in the other direction. Therefore, the electrodes are named based on the direction of this "forward" current. In a diode the anode is the terminal through which current enters and the cathode is the terminal through which current leaves, when the diode is forward biased. The names of the electrodes do not change in cases where reverse current flows through the device. Similarly, in a vacuum tube only one electrode can emit electrons into the evacuated tube due to being heated by a filament, so electrons can only enter the device from the external circuit through the heated electrode. Therefore, this electrode is permanently named the cathode, and the electrode through which the electrons exit the tube is named the anode. Examples. The polarity of voltage on an anode with respect to an associated cathode varies depending on the device type and on its operating mode. 
In the following examples, the anode is negative in a device that provides power, and positive in a device that consumes power: In a discharging battery or galvanic cell (diagram on left), the anode is the negative terminal: it is where conventional current flows into the cell. This inward current is carried externally by electrons moving outwards. In a recharging battery, or an electrolytic cell, the anode is the positive terminal imposed by an external source of potential difference. The current through a recharging battery is opposite to the direction of current during discharge; in other words, the electrode which was the cathode during battery discharge becomes the anode while the battery is recharging. In battery engineering, it is common to designate one electrode of a rechargeable battery the anode and the other the cathode according to the roles the electrodes play when the battery is discharged. This is despite the fact that the roles are reversed when the battery is charged. When this is done, "anode" simply designates the negative terminal of the battery and "cathode" designates the positive terminal. In a diode, the anode is the terminal represented by the tail of the arrow symbol (flat side of the triangle), where conventional current flows into the device. Note the electrode naming for diodes is always based on the direction of the forward current (that of the arrow, in which the current flows "most easily"), even for types such as Zener diodes or solar cells where the current of interest is the reverse current. In vacuum tubes or gas-filled tubes, the anode is the terminal where current enters the tube. Etymology. The word was coined in 1834 from the Greek ἄνοδος ("anodos"), 'ascent', by William Whewell, who had been consulted by Michael Faraday over some new names needed to complete a paper on the recently discovered process of electrolysis. In that paper Faraday explained that when an electrolytic cell is oriented so that electric current traverses the "decomposing body" (electrolyte) in a direction "from East to West, or, which will strengthen this help to the memory, that in which the sun appears to move", the anode is where the current enters the electrolyte, on the East side: ""ano" upwards, "odos" a way; the way which the sun rises". The use of 'East' to mean the 'in' direction (actually 'in' → 'East' → 'sunrise' → 'up') may appear contrived. Previously, as related in the first reference cited above, Faraday had used the more straightforward term "eisode" (the doorway where the current enters). His motivation for changing it to something meaning 'the East electrode' (other candidates had been "eastode", "oriode" and "anatolode") was to make it immune to a possible later change in the direction convention for current, whose exact nature was not known at the time. The reference he used to this effect was the Earth's magnetic field direction, which at that time was believed to be invariant. He fundamentally defined his arbitrary orientation for the cell as being that in which the internal current would run parallel to and in the same direction as a hypothetical magnetizing current loop around the local line of latitude which would induce a magnetic dipole field oriented like the Earth's. This made the internal current East to West as previously mentioned, but in the event of a later convention change it would have become West to East, so that the East electrode would not have been the 'way in' any more. 
Therefore, "eisode" would have become inappropriate, whereas "anode" meaning 'East electrode' would have remained correct with respect to the unchanged direction of the actual phenomenon underlying the current, then unknown but, he thought, unambiguously defined by the magnetic reference. In retrospect the name change was unfortunate, not only because the Greek roots alone do not reveal the anode's function any more, but more importantly because as we now know, the Earth's magnetic field direction on which the "anode" term is based is subject to reversals whereas the current direction convention on which the "eisode" term was based has no reason to change in the future. Since the later discovery of the electron, an easier to remember and more durably correct technically although historically false, etymology has been suggested: anode, from the Greek "anodos", 'way up', 'the way (up) out of the cell (or other device) for electrons'. Electrolytic anode. In electrochemistry, the "anode" is where oxidation occurs and is the positive polarity contact in an electrolytic cell. At the anode, anions (negative ions) are forced by the electrical potential to react chemically and give off electrons (oxidation) which then flow up and into the driving circuit. Mnemonics: LEO Red Cat (Loss of Electrons is Oxidation, Reduction occurs at the Cathode), or AnOx Red Cat (Anode Oxidation, Reduction Cathode), or OIL RIG (Oxidation is Loss, Reduction is Gain of electrons), or Roman Catholic and Orthodox (Reduction – Cathode, anode – Oxidation), or LEO the lion says GER (Losing electrons is Oxidation, Gaining electrons is Reduction). This process is widely used in metals refining. For example, in copper refining, copper anodes, an intermediate product from the furnaces, are electrolysed in an appropriate solution (such as sulfuric acid) to yield high purity (99.99%) cathodes. Copper cathodes produced using this method are also described as electrolytic copper. Historically, when non-reactive anodes were desired for electrolysis, graphite (called plumbago in Faraday's time) or platinum were chosen. They were found to be some of the least reactive materials for anodes. Platinum erodes very slowly compared to other materials, and graphite crumbles and can produce carbon dioxide in aqueous solutions but otherwise does not participate in the reaction. Battery or galvanic cell anode. In a battery or galvanic cell, the anode is the negative electrode from which electrons flow out towards the external part of the circuit. Internally the positively charged cations are flowing away from the anode (even though it is negative and therefore would be expected to attract them, this is due to electrode potential relative to the electrolyte solution being different for the anode and cathode metal/electrolyte systems); but, external to the cell in the circuit, electrons are being pushed out through the negative contact and thus through the circuit by the voltage potential as would be expected. Note: in a galvanic cell, contrary to what occurs in an electrolytic cell, no anions flow to the anode, the internal current being entirely accounted for by the cations flowing away from it (cf drawing). Battery manufacturers may regard the negative electrode as the anode, particularly in their technical literature. Though technically incorrect, it does resolve the problem of which electrode is the anode in a secondary (or rechargeable) cell. Using the traditional definition, the anode switches ends between charge and discharge cycles. 
Vacuum tube anode. In electronic vacuum devices such as a cathode-ray tube, the anode is the positively charged electron collector. In a tube, the anode is a charged positive plate that collects the electrons emitted by the cathode through electric attraction. It also accelerates the flow of these electrons. Diode anode. In a semiconductor diode, the anode is the P-doped layer which initially supplies holes to the junction. In the junction region, the holes supplied by the anode combine with electrons supplied from the N-doped region, creating a depleted zone. As the P-doped layer supplies holes to the depleted region, negative dopant ions are left behind in the P-doped layer ('P' for positive charge-carrier ions). This creates a base negative charge on the anode. When a positive voltage is applied to the anode of the diode from the circuit, more holes are able to be transferred to the depleted region, and this causes the diode to become conductive, allowing current to flow through the circuit. The terms anode and cathode should not be applied to a Zener diode, since it allows flow in either direction, depending on the polarity of the applied potential (i.e. voltage). Sacrificial anode. In cathodic protection, a metal anode that is more reactive to the corrosive environment than the metal system to be protected is electrically linked to the protected system. As a result, the metal anode partially corrodes or dissolves instead of the metal system. As an example, an iron or steel ship's hull may be protected by a zinc sacrificial anode, which will dissolve into the seawater and prevent the hull from being corroded. Sacrificial anodes are particularly needed for systems where a static charge is generated by the action of flowing liquids, such as pipelines and watercraft. Sacrificial anodes are also generally used in tank-type water heaters. In 1824, to reduce the impact of this destructive electrolytic action on ships' hulls, their fastenings and underwater equipment, the scientist-engineer Humphry Davy developed the first and still most widely used marine electrolysis protection system. Davy installed sacrificial anodes made from a more electrically reactive (less noble) metal attached to the vessel hull and electrically connected to form a cathodic protection circuit. A less obvious example of this type of protection is the process of galvanising iron. This process coats iron structures (such as fencing) with a coating of zinc metal. As long as the zinc remains intact, the iron is protected from the effects of corrosion. Inevitably, the zinc coating becomes breached, either by cracking or physical damage. Once this occurs, corrosive elements act as an electrolyte and the zinc/iron combination as electrodes. The resultant current ensures that the zinc coating is sacrificed but that the base iron does not corrode. Such a coating can protect an iron structure for a few decades, but once the protecting coating is consumed, the iron rapidly corrodes. If, conversely, tin is used to coat steel, when a breach of the coating occurs it actually accelerates oxidation of the iron. Impressed current anode. Another form of cathodic protection uses an impressed current anode. It is made from titanium and covered with mixed metal oxide. Unlike the sacrificial anode rod, the impressed current anode does not sacrifice its structure. This technology uses an external current provided by a DC source to create the cathodic protection. 
Impressed current anodes are used in larger structures like pipelines, boats, and water heaters. Related antonym. The opposite of an anode is a cathode. When the current through the device is reversed, the electrodes switch functions, so the anode becomes the cathode and the cathode becomes the anode, for as long as the reversed current is applied. The exception is diodes, where electrode naming is always based on the forward current direction.
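To restate the naming rule informally, here is a small illustrative Python sketch. The function name, parameters, and string labels are hypothetical simplifications; it merely encodes the convention given in this article, including the fixed labeling used for diodes and vacuum tubes.

```python
def anode_polarity(device: str, delivering_power: bool) -> str:
    """Which terminal carries the label "anode"? (Illustrative only.)

    Per the convention described in this article:
    - a device delivering power (e.g. a discharging galvanic cell) has its
      anode at the negative terminal;
    - a device consuming power (e.g. an electrolytic cell, or a battery
      being recharged) has its anode at the positive terminal;
    - diodes and vacuum tubes keep a fixed label based on forward current.
    """
    if device in ("diode", "vacuum tube"):
        return "fixed terminal (defined by the forward current direction)"
    return "negative terminal" if delivering_power else "positive terminal"


print(anode_polarity("galvanic cell", delivering_power=True))       # negative terminal
print(anode_polarity("electrolytic cell", delivering_power=False))  # positive terminal
print(anode_polarity("diode", delivering_power=False))              # fixed terminal
```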
2393
Analog television
Analog television is the original television technology that uses analog signals to transmit video and audio. In an analog television broadcast, the brightness, colors and sound are represented by amplitude, phase and frequency of an analog signal. Analog signals vary over a continuous range of possible values which means that electronic noise and interference may be introduced. Thus with analog, a moderately weak signal becomes snowy and subject to interference. In contrast, picture quality from a digital television (DTV) signal remains good until the signal level drops below a threshold where reception is no longer possible or becomes intermittent. Analog television may be wireless (terrestrial television and satellite television) or can be distributed over a cable network as cable television. All broadcast television systems used analog signals before the arrival of DTV. Motivated by the lower bandwidth requirements of compressed digital signals, beginning in the 2000s, a digital television transition is proceeding in most countries of the world, with different deadlines for the cessation of analog broadcasts. Several countries have made the switch already, with the remaining countries still in progress mostly in Africa and Asia. Development. The earliest systems of analog television were mechanical television systems that used spinning disks with patterns of holes punched into the disc to scan an image. A similar disk reconstructed the image at the receiver. Synchronization of the receiver disc rotation was handled through sync pulses broadcast with the image information. Camera systems used similar spinning discs and required intensely bright illumination of the subject for the light detector to work. The reproduced images from these mechanical systems were dim, very low resolution and flickered severely. Analog television did not really begin as an industry until the development of the cathode-ray tube (CRT), which uses a focused electron beam to trace lines across a phosphor coated surface. The electron beam could be swept across the screen much faster than any mechanical disc system, allowing for more closely spaced scan lines and much higher image resolution. Also, far less maintenance was required of an all-electronic system compared to a mechanical spinning disc system. All-electronic systems became popular with households after World War II. Standards. Broadcasters of analog television encode their signal using different systems. The official systems of transmission were defined by the ITU in 1961 as: A, B, C, D, E, F, G, H, I, K, K1, L, M and N. These systems determine the number of scan lines, frame rate, channel width, video bandwidth, video-audio separation, and so on. A color encoding scheme (NTSC, PAL, or SECAM) could be added to the base monochrome signal. Using RF modulation the signal is then modulated onto a very high frequency (VHF) or ultra high frequency (UHF) carrier wave. Each frame of a television image is composed of scan lines drawn on the screen. The lines are of varying brightness; the whole set of lines is drawn quickly enough that the human eye perceives it as one image. The process repeats and next sequential frame is displayed, allowing the depiction of motion. The analog television signal contains timing and synchronization information so that the receiver can reconstruct a two-dimensional moving image from a one-dimensional time-varying signal. 
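To illustrate how a two-dimensional frame becomes a one-dimensional time-varying signal, the following minimal Python sketch serializes a tiny monochrome frame line by line, inserting a sync pulse and a blanking interval before each line. The voltage levels (0 V sync, 0.3 V black, 1 V white) are the composite-video figures quoted later in this article; the frame size and sample counts are arbitrary toy values, not part of any broadcast standard.

```python
# Toy serialization of a 2-D luminance frame into a 1-D composite-like signal.
SYNC, BLANK, BLACK, WHITE = 0.0, 0.3, 0.3, 1.0  # volts, per the levels quoted below

def serialize_frame(frame):
    """frame: 2-D list of luminance values in 0..1 (0 = black, 1 = white)."""
    signal = []
    for line in frame:
        signal += [SYNC] * 5     # horizontal sync pulse (toy duration)
        signal += [BLANK] * 3    # blanking before active video (toy duration)
        for pixel in line:       # active video: map luminance onto 0.3-1.0 V
            signal.append(BLACK + pixel * (WHITE - BLACK))
    return signal

tiny_frame = [[0.0, 0.5, 1.0],
              [1.0, 0.5, 0.0]]
print(serialize_frame(tiny_frame))
```

The receiver performs the reverse operation, using the sync levels to decide where each new line (and field) begins, which is the role of the synchronization information described above.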
The first commercial television systems were black-and-white; the beginning of color television was in the 1950s. A practical television system needs to take luminance, chrominance (in a color system), synchronization (horizontal and vertical), and audio signals, and broadcast them over a radio transmission. The transmission system must include a means of television channel selection. Analog broadcast television systems come in a variety of frame rates and resolutions. Further differences exist in the frequency and modulation of the audio carrier. The monochrome combinations still existing in the 1950s were standardized by the International Telecommunication Union (ITU) as capital letters A through N. When color television was introduced, the chrominance information was added to the monochrome signals in a way that black and white televisions ignore. In this way backward compatibility was achieved. There are three standards for the way the additional color information can be encoded and transmitted. The first was the American NTSC system. The European and Australian PAL and the French and former Soviet Union SECAM standards were developed later and attempt to cure certain defects of the NTSC system. PAL's color encoding is similar to the NTSC systems. SECAM, though, uses a different modulation approach than PAL or NTSC. PAL had a late evolution called PALplus, allowing widescreen broadcasts while remaining fully compatible with existing PAL equipment. In principle, all three color encoding systems can be used with any scan line/frame rate combination. Therefore, in order to describe a given signal completely, it's necessary to quote the color system and the broadcast standard as a capital letter. For example, the United States, Canada, Mexico and South Korea use NTSC-M, Japan uses NTSC-J, the UK uses PAL-I, France uses SECAM-L, much of Western Europe and Australia use PAL-B/G, most of Eastern Europe uses SECAM-D/K or PAL-D/K and so on. However, not all of these possible combinations actually exist. NTSC is currently only used with system M, even though there were experiments with NTSC-A (405 line) in the UK and NTSC-N (625 line) in part of South America. PAL is used with a variety of 625-line standards (B, G, D, K, I, N) but also with the North American 525-line standard, accordingly named PAL-M. Likewise, SECAM is used with a variety of 625-line standards. For this reason, many people refer to any 625/25 type signal as "PAL" and to any 525/30 signal as "NTSC", even when referring to digital signals; for example, on DVD-Video, which does not contain any analog color encoding, and thus no PAL or NTSC signals at all. Although a number of different broadcast television systems were in use worldwide, the same principles of operation apply. Displaying an image. A cathode-ray tube (CRT) television displays an image by scanning a beam of electrons across the screen in a pattern of horizontal lines known as a raster. At the end of each line, the beam returns to the start of the next line; at the end of the last line, the beam returns to the beginning of the first line at the top of the screen. As it passes each point, the intensity of the beam is varied, varying the luminance of that point. A color television system is similar except there are three beams that scan together and an additional signal known as chrominance controls the color of the spot. 
When analog television was developed, no affordable technology for storing video signals existed; the luminance signal had to be generated and transmitted at the same time at which it is displayed on the CRT. It was therefore essential to keep the raster scanning in the camera (or other device for producing the signal) in exact synchronization with the scanning in the television. The physics of the CRT require that a finite time interval be allowed for the spot to move back to the start of the next line ("horizontal retrace") or the start of the screen ("vertical retrace"). The timing of the luminance signal must allow for this. The human eye has a characteristic called phi phenomenon. Quickly displaying successive scan images creates the illusion of smooth motion. Flickering of the image can be partially solved using a long persistence phosphor coating on the CRT so that successive images fade slowly. However, slow phosphor has the negative side-effect of causing image smearing and blurring when there is rapid on-screen motion occurring. The maximum frame rate depends on the bandwidth of the electronics and the transmission system, and the number of horizontal scan lines in the image. A frame rate of 25 or 30 hertz is a satisfactory compromise, while the process of interlacing two video fields of the picture per frame is used to build the image. This process doubles the apparent number of video frames per second and further reduces flicker and other defects in transmission. Receiving signals. The television system for each country will specify a number of television channels within the UHF or VHF frequency ranges. A channel actually consists of two signals: the picture information is transmitted using amplitude modulation on one carrier frequency, and the sound is transmitted with frequency modulation at a frequency at a fixed offset (typically 4.5 to 6 MHz) from the picture signal. The channel frequencies chosen represent a compromise between allowing enough bandwidth for video (and hence satisfactory picture resolution), and allowing enough channels to be packed into the available frequency band. In practice a technique called vestigial sideband is used to reduce the channel spacing, which would be nearly twice the video bandwidth if pure AM was used. Signal reception is invariably done via a superheterodyne receiver: the first stage is a "tuner" which selects a television channel and frequency-shifts it to a fixed intermediate frequency (IF). The signal amplifier performs amplification to the IF stages from the microvolt range to fractions of a volt. Extracting the sound. At this point the IF signal consists of a video carrier signal at one frequency and the sound carrier at a fixed offset in frequency. A demodulator recovers the video signal. Also at the output of the same demodulator is a new frequency modulated sound carrier at the offset frequency. In some sets made before 1948, this was filtered out, and the sound IF of about 22 MHz was sent to an FM demodulator to recover the basic sound signal. In newer sets, this new carrier at the offset frequency was allowed to remain as "intercarrier sound", and it was sent to an FM demodulator to recover the basic sound signal. One particular advantage of intercarrier sound is that when the front panel fine tuning knob is adjusted, the sound carrier frequency does not change with the tuning, but stays at the above-mentioned offset frequency. Consequently, it is easier to tune the picture without losing the sound. 
So the FM sound carrier is then demodulated, amplified, and used to drive a loudspeaker. Until the advent of the NICAM and MTS systems, television sound transmissions were monophonic. Structure of a video signal. The video carrier is demodulated to give a composite video signal containing luminance, chrominance and synchronization signals. The result is identical to the composite video format used by analog video devices such as VCRs or CCTV cameras. To ensure good linearity and thus fidelity, consistent with affordable manufacturing costs of transmitters and receivers, the video carrier is never modulated to the extent that it is shut off altogether. When intercarrier sound was introduced later in 1948, not completely shutting off the carrier had the side effect of allowing intercarrier sound to be economically implemented. Each line of the displayed image is transmitted using a signal as shown above. The same basic format (with minor differences mainly related to timing and the encoding of color) is used for PAL, NTSC, and SECAM television systems. A monochrome signal is identical to a color one, with the exception that the elements shown in color in the diagram (the colorburst, and the chrominance signal) are not present. The "front porch" is a brief (about 1.5 microsecond) period inserted between the end of each transmitted line of picture and the leading edge of the next line's sync pulse. Its purpose was to allow voltage levels to stabilise in older televisions, preventing interference between picture lines. The "front porch" is the first component of the horizontal blanking interval which also contains the horizontal sync pulse and the "back porch". The "back porch" is the portion of each scan line between the end (rising edge) of the horizontal sync pulse and the start of active video. It is used to restore the black level (300 mV) reference in analog video. In signal processing terms, it compensates for the fall time and settling time following the sync pulse. In color television systems such as PAL and NTSC, this period also includes the colorburst signal. In the SECAM system, it contains the reference subcarrier for each consecutive color difference signal in order to set the zero-color reference. In some professional systems, particularly satellite links between locations, the digital audio is embedded within the line sync pulses of the video signal, to save the cost of renting a second channel. The name for this proprietary system is Sound-in-Syncs. Monochrome video signal extraction. The luminance component of a composite video signal varies between 0 V and approximately 0.7 V above the "black" level. In the NTSC system, there is a "blanking" signal level used during the front porch and back porch, and a "black" signal level 75 mV above it; in PAL and SECAM these are identical. In a monochrome receiver, the luminance signal is amplified to drive the control grid in the electron gun of the CRT. This changes the intensity of the electron beam and therefore the brightness of the spot being scanned. Brightness and contrast controls determine the DC shift and amplification, respectively. Color video signal extraction. U and V signals. A color signal conveys picture information for each of the red, green, and blue components of an image. However, these are not simply transmitted as three separate signals, because: such a signal would not be compatible with monochrome receivers, an important consideration when color broadcasting was first introduced. 
It would also occupy three times the bandwidth of existing television, requiring a decrease in the number of television channels available. Instead, the RGB signals are converted into YUV form, where the Y signal represents the luminance of the colors in the image. Because the rendering of colors in this way is the goal of both monochrome film and television systems, the Y signal is ideal for transmission as the luminance signal. This ensures a monochrome receiver will display a correct picture in black and white, where a given color is reproduced by a shade of gray that correctly reflects how light or dark the original color is. The U and V signals are "color difference" signals. The U signal is the difference between the B signal and the Y signal, also known as B minus Y (B-Y), and the V signal is the difference between the R signal and the Y signal, also known as R minus Y (R-Y). The U signal then represents how purplish-blue or its complementary color, yellowish-green, the color is, and the V signal how purplish-red or its complementary color, greenish-cyan, it is. The advantage of this scheme is that the U and V signals are zero when the picture has no color content. Since the human eye is more sensitive to detail in luminance than in color, the U and V signals can be transmitted with reduced bandwidth with acceptable results. In the receiver, a single demodulator can extract an additive combination of U plus V. An example is the X demodulator used in the X/Z demodulation system. In that same system, a second demodulator, the Z demodulator, also extracts an additive combination of U plus V, but in a different ratio. The X and Z color difference signals are further matrixed into three color difference signals, (R-Y), (B-Y), and (G-Y). The combinations of usually two, but sometimes three, demodulators were: In the end, further matrixing of the above color-difference signals yielded the three color-difference signals, (R-Y), (B-Y), and (G-Y). The R, G, and B signals in the receiver needed for the display device (CRT, plasma display, or LCD display) are electronically derived by matrixing as follows: R is the additive combination of (R-Y) with Y, G is the additive combination of (G-Y) with Y, and B is the additive combination of (B-Y) with Y. All of this is accomplished electronically. It can be seen that in the combining process, the low-resolution portion of the Y signals cancels out, leaving R, G, and B signals able to render a low-resolution image in full color. However, the higher-resolution portions of the Y signals do not cancel out, and so are equally present in R, G, and B, producing the higher-resolution image detail in monochrome, although it appears to the human eye as a full-color and full-resolution picture. NTSC and PAL systems. In the NTSC and PAL color systems, U and V are transmitted by using quadrature amplitude modulation of a subcarrier. This kind of modulation applies two independent signals to one subcarrier, with the idea that both signals will be recovered independently at the receiving end. For NTSC, the subcarrier is at 3.58 MHz. For the PAL system it is at 4.43 MHz. The subcarrier itself is not included in the modulated signal (suppressed carrier); it is the subcarrier sidebands that carry the U and V information. The usual reason for using suppressed carrier is that it saves on transmitter power. In this application a more important advantage is that the color signal disappears entirely in black and white scenes. 
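As a rough numerical sketch of the encoding just described, the following Python snippet forms Y, U and V from R, G and B using the commonly quoted analog luma weights and color-difference scaling factors, then places U and V on a single subcarrier by quadrature amplitude modulation. The exact constants, the default subcarrier frequency and the sine/cosine axis assignment are illustrative simplifications, not a full broadcast specification.

```python
import math

def rgb_to_yuv(r, g, b):
    """r, g, b in 0..1. Weights and scale factors are the commonly quoted analog values."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance: what a monochrome set displays
    u = 0.492 * (b - y)                      # scaled B - Y color-difference signal
    v = 0.877 * (r - y)                      # scaled R - Y color-difference signal
    return y, u, v

def chroma_sample(u, v, t, f_sc=4.43e6):
    """Quadrature amplitude modulation of U and V onto a suppressed subcarrier.

    f_sc defaults to roughly the PAL subcarrier frequency; assigning U to the
    sine axis and V to the cosine axis is a simplification of the real standards.
    """
    return u * math.sin(2 * math.pi * f_sc * t) + v * math.cos(2 * math.pi * f_sc * t)

# A saturated color produces non-zero U and V; a grey produces U = V = 0,
# so the chroma (and hence the subcarrier sidebands) vanish on monochrome scenes.
print(rgb_to_yuv(1.0, 0.0, 0.0))   # pure red
print(rgb_to_yuv(0.5, 0.5, 0.5))   # mid grey -> (0.5, 0.0, 0.0)
```

The grey example shows numerically why suppressed-carrier QAM makes the color signal disappear on black-and-white scenes, as noted above.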
The subcarrier is within the bandwidth of the main luminance signal and consequently can cause undesirable artifacts on the picture, all the more noticeable in black and white receivers. A small sample of the subcarrier, the colorburst, is included in the horizontal blanking portion, which is not visible on the screen. This is necessary to give the receiver a phase reference for the modulated signal. Under quadrature amplitude modulation the modulated chrominance signal changes phase as compared to its subcarrier and also changes amplitude. The chrominance amplitude (when considered together with the Y signal) represents the approximate saturation of a color, and the chrominance phase against the subcarrier reference approximately represents the hue of the color. For particular test colors found in the test color bar pattern, exact amplitudes and phases are sometimes defined for test and troubleshooting purposes only. Due to the nature of the quadrature amplitude modulation process that created the chrominance signal, at certain times, the signal represents only the U signal, and 70 nanoseconds (NTSC) later, it represents only the V signal. About 70 nanoseconds later still, -U, and another 70 nanoseconds, -V. So to extract U, a synchronous demodulator is utilized, which uses the subcarrier to briefly gate the chroma every 280 nanoseconds, so that the output is only a train of discrete pulses, each having an amplitude that is the same as the original U signal at the corresponding time. In effect, these pulses are discrete-time analog samples of the U signal. The pulses are then low-pass filtered so that the original analog continuous-time U signal is recovered. For V, a 90-degree shifted subcarrier briefly gates the chroma signal every 280 nanoseconds, and the rest of the process is identical to that used for the U signal. Gating at any other time than those times mentioned above will yield an additive mixture of any two of U, V, -U, or -V. One of these "off-axis" (that is, off the U and V axes) gating methods is called I/Q demodulation. Another much more popular off-axis scheme was the X/Z demodulation system. Further matrixing recovered the original U and V signals. This scheme was actually the most popular demodulator scheme throughout the 1960s. The above process uses the subcarrier. But as previously mentioned, it was deleted before transmission, and only the chroma is transmitted. Therefore, the receiver must reconstitute the subcarrier. For this purpose, a short burst of the subcarrier, known as the colorburst, is transmitted during the back porch (re-trace blanking period) of each scan line. A subcarrier oscillator in the receiver locks onto this signal (see phase-locked loop) to achieve a phase reference, resulting in the oscillator producing the reconstituted subcarrier. NTSC uses this process unmodified. Unfortunately, this often results in poor color reproduction due to phase errors in the received signal, caused sometimes by multipath, but mostly by poor implementation at the studio end. With the advent of solid-state receivers, cable TV, and digital studio equipment for conversion to an over-the-air analog signal, these NTSC problems have been largely fixed, leaving operator error at the studio end as the sole color rendition weakness of the NTSC system. In any case, the PAL D (delay) system mostly corrects these kinds of errors by reversing the phase of the signal on each successive line, and averaging the results over pairs of lines. 
This process is achieved by the use of a 1H (where H = horizontal scan frequency) duration delay line. Phase shift errors between successive lines are therefore canceled out and the wanted signal amplitude is increased when the two in-phase (coincident) signals are re-combined. NTSC is more spectrum-efficient than PAL, giving more picture detail for a given bandwidth. This is because sophisticated comb filters in receivers are more effective with NTSC's four-field color sequence than with PAL's eight-field sequence. However, in the end, the larger channel width of most PAL systems in Europe still gives PAL systems the edge in transmitting more picture detail. SECAM system. In the SECAM television system, U and V are transmitted on "alternate" lines, using simple frequency modulation of two different color subcarriers. In some analog color CRT displays, starting in 1956, the brightness control signal (luminance) is fed to the cathode connections of the electron guns, and the color difference signals (chrominance signals) are fed to the control grid connections. This simple CRT matrixing technique was replaced in later solid-state designs by signal processing that returned to the original matrixing method used in the 1954 and 1955 color TV receivers. Synchronization. Synchronizing pulses added to the video signal at the end of every scan line and video frame ensure that the sweep oscillators in the receiver remain locked in step with the transmitted signal so that the image can be reconstructed on the receiver screen. A "sync separator" circuit detects the sync voltage levels and sorts the pulses into horizontal and vertical sync. Horizontal synchronization. The horizontal sync pulse separates the scan lines. The horizontal sync signal is a single short pulse that indicates the start of every line. The rest of the scan line follows, with the signal ranging from 0.3 V (black) to 1 V (white), until the next horizontal or vertical synchronization pulse. The format of the horizontal sync pulse varies. In the 525-line NTSC system it is a 4.85 μs pulse at 0 V. In the 625-line PAL system the pulse is 4.7 μs at 0 V. This is lower than the amplitude of any video signal ("blacker than black"), so it can be detected by the level-sensitive "sync separator" circuit of the receiver. Two timing intervals are defined – the "front porch" between the end of the displayed video and the start of the sync pulse, and the "back porch" after the sync pulse and before the displayed video. These and the sync pulse itself are called the "horizontal blanking" (or "retrace") "interval" and represent the time that the electron beam in the CRT is returning to the start of the next display line. Vertical synchronization. Vertical synchronization separates the video fields. In PAL and NTSC, the vertical sync pulse occurs within the vertical blanking interval. The vertical sync pulses are made by prolonging the length of horizontal sync pulses through almost the entire length of the scan line. The "vertical sync" signal is a series of much longer pulses, indicating the start of a new field. The sync pulses occupy the whole line interval of a number of lines at the beginning and end of a scan; no picture information is transmitted during vertical retrace. The pulse sequence is designed to allow horizontal sync to continue during vertical retrace; it also indicates whether each field represents even or odd lines in interlaced systems (depending on whether it begins at the start of a horizontal line, or midway through). 
The format of such a signal in 525-line NTSC is as follows: each pre- or post-equalizing pulse consists of half a scan line of black signal: 2 μs at 0 V, followed by 30 μs at 0.3 V. Each long sync pulse consists of an equalizing pulse with timings inverted: 30 μs at 0 V, followed by 2 μs at 0.3 V. In video production and computer graphics, changes to the image are often performed during the vertical blanking interval to avoid visible discontinuity of the image. If the image in the framebuffer is updated with a new image while the display is being refreshed, the display shows a mishmash of both frames, producing page tearing partway down the image. Horizontal and vertical hold. The sweep (or deflection) oscillators were designed to run without a signal from the television station (or VCR, computer, or other composite video source). This allows the television receiver to display a raster and to allow an image to be presented during antenna placement. With sufficient signal strength, the receiver's sync separator circuit would split timebase pulses from the incoming video and use them to reset the horizontal and vertical oscillators at the appropriate time to synchronize with the signal from the station. The free-running oscillation of the horizontal circuit is especially critical, as the horizontal deflection circuits typically power the flyback transformer (which provides acceleration potential for the CRT) as well as the filaments for the high voltage rectifier tube and sometimes the filament(s) of the CRT itself. Without the operation of the horizontal oscillator and output stages in these television receivers, there would be no illumination of the CRT's face. The lack of precision timing components in early equipment meant that the timebase circuits occasionally needed manual adjustment. If their free-run frequencies were too far from the actual line and field rates, the circuits would not be able to follow the incoming sync signals. Loss of horizontal synchronization usually resulted in an unwatchable picture; loss of vertical synchronization would produce an image rolling up or down the screen. Older analog television receivers often provide manual controls to adjust horizontal and vertical timing. The adjustment takes the form of "horizontal hold" and "vertical hold" controls, usually on the front panel along with other common controls. These adjust the free-run frequencies of the corresponding timebase oscillators. A slowly rolling vertical picture demonstrates that the vertical oscillator is nearly synchronized with the television station but is not locking to it, often due to a weak signal or a failure in the sync separator stage not resetting the oscillator. Horizontal sync errors cause the image to be torn diagonally and repeated across the screen as if it were wrapped around a screw or a barber's pole; the greater the error, the more copies of the image will be seen at once wrapped around the barber pole. By the early 1980s the efficacy of the synchronization circuits, plus the inherent stability of the sets' oscillators, had been improved to the point where these controls were no longer necessary. Integrated circuits which eliminated the horizontal hold control began to appear as early as 1969. The final generations of analog television receivers used IC-based designs where the receiver's timebases were derived from accurate crystal oscillators. With these sets, adjustment of the free-running frequency of either sweep oscillator was unnecessary and unavailable. 
Horizontal and vertical hold controls were rarely used in CRT-based computer monitors, as the quality and consistency of components were quite high by the advent of the computer age, but might be found on some composite monitors used with the 1970s–1980s home or personal computers. Other technical information. Components of a television system. A typical analog monochrome television receiver is built around a standard chain of circuit stages, described below. The tuner is the stage which "plucks" the television signals out of the air, with the aid of an antenna. There are two types of tuners in analog television, VHF and UHF tuners. The VHF tuner selects the VHF television frequency. This consists of a 4 MHz video bandwidth and a 2 MHz audio bandwidth. It then amplifies the signal and converts it to a 45.75 MHz intermediate frequency (IF) amplitude-modulated picture carrier and a 41.25 MHz IF frequency-modulated audio carrier. The IF amplifiers are centered at 44 MHz for optimal frequency transference of the audio and video carriers. The IF transformers center this frequency. They are designed for a certain amount of bandwidth to encompass the audio and video, which depends on the number of stages (the amplifiers between the transformers). Most of the early television sets (1939–45) used 4 stages with specially designed video amplifier tubes (the type 1852/6AC7). In 1946, RCA presented a new innovation in television: the RCA 630TS. Instead of using the 1852 octal tube, it used the 6AG5 7-pin miniature tube. It still had 4 stages, but it was half the size. Soon all of the manufacturers followed RCA and designed better IF stages. They developed higher-amplification tubes and lower stage counts with more amplification. When the tube era came to an end in the mid-1970s, the IF stages had been shrunk down to one or two (depending on the set), with the same amplification as the four-stage 1852-tube sets. Like radio, television has Automatic Gain Control (AGC). This controls the gain of the IF amplifier stages and the tuner. More of this is discussed below. The video amp and output amplifier consist of a low-power linear pentode or a high-powered transistor. The video amp and output stage separate the 45.75 MHz from the 41.25 MHz. It simply uses a diode to detect the video signal. But the frequency-modulated audio is still in the video. Since the diode only detects AM signals, the FM audio signal is still in the video in the form of a 4.5 MHz signal. There are two ways to attack this problem, and both of them work: the sound can be detected before the signal enters the video amplifier, or after the video amplifier. Many television sets (1946 to late 1960s) used the after-video-amplification method, but of course, there is the occasional exception. Many of the later sets (late 1960s onward) use the before-the-video-amplifier method. Some of the early television sets (1939–45) used a separate tuner for the sound, so there was no need for a detection stage next to the amplifier. After the video detector, the video is amplified and sent to the sync separator and then to the picture tube. The audio signal is extracted by a 4.5 MHz trap coil/transformer. After that, it then goes to a 4.5 MHz amplifier. This amplifier prepares the signal for the 4.5 MHz detector. It then goes through a 4.5 MHz IF transformer to the detector. In television, there are two ways of detecting FM signals. One is the ratio detector, which is simple but very hard to align. The other is the quadrature detector. 
It was invented in 1954. The first tube designed for this purpose was the 6BN6 type. It is easy to align and simple in circuitry. It was such a good design that it is still being used today in integrated-circuit form. After the detector, the signal goes to the audio amplifier. The next part is the sync separator/clipper. This does more than its name suggests: it also forms the AGC voltage, as previously stated. The sync separator turns the video into a signal that the horizontal and vertical oscillators can use to keep in sync with the video. The horizontal and vertical oscillators form the raster on the CRT. They are kept in sync by the sync separator. There are many ways to create these oscillators. The earliest of these is the thyratron oscillator. Although it is known to drift, it makes a perfect sawtooth wave. This sawtooth wave is so good that no linearity control is needed. This oscillator was intended for electrostatic-deflection CRTs, but it also found some use with electromagnetically deflected CRTs. The next oscillator is the blocking oscillator, which uses a transformer to create a sawtooth wave. It was used only for a brief period and never became very popular. The next oscillator is the multivibrator. This oscillator was probably the most successful. It needed more adjustment than the other oscillators, but it is very simple and effective. It was so popular that it was used from the early 1950s until today. The oscillator amplifiers are sorted into two categories. The vertical amplifier directly drives the yoke. There is not much to this; it is similar to an audio amplifier. The horizontal amplifier is a different situation. It must supply the high voltage and the yoke power. This requires a high-power flyback transformer and a high-powered tube or transistor. This is a problematic section for CRT televisions because it has to handle high power. Sync separator. Image synchronization is achieved by transmitting negative-going pulses; in a composite video signal of 1-volt amplitude, these are approximately 0.3 V below the "black level". The "horizontal sync" signal is a single short pulse which indicates the start of every line. Two timing intervals are defined – the "front porch" between the end of the displayed video and the start of the sync pulse, and the "back porch" after the sync pulse and before the displayed video. These and the sync pulse itself are called the "horizontal blanking" (or "retrace") "interval" and represent the time that the electron beam in the CRT is returning to the start of the next display line. The "vertical sync" signal is a series of much longer pulses, indicating the start of a new field. The sync pulses occupy the whole line interval of a number of lines at the beginning and end of a scan; no picture information is transmitted during vertical retrace. The pulse sequence is designed to allow horizontal sync to continue during vertical retrace; it also indicates whether each field represents even or odd lines in interlaced systems (depending on whether it begins at the start of a horizontal line, or midway through). In the television receiver, a "sync separator" circuit detects the sync voltage levels and sorts the pulses into horizontal and vertical sync. Loss of horizontal synchronization usually resulted in an unwatchable picture; loss of vertical synchronization would produce an image rolling up or down the screen. 
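As a minimal illustration of what the sync separator does, the following Python sketch (written for this description, not taken from any real receiver design) slices a sampled composite signal at a level between the sync tips (0 V) and the blanking level (0.3 V), then classifies the resulting pulses by duration, since the broad pulses of the vertical interval last far longer than ordinary line sync pulses. The 0.15 V threshold, the 64 μs line period and the width boundary are assumptions chosen for the example.

import numpy as np

def separate_sync(samples, fs_hz, line_period_s=64e-6, threshold_v=0.15):
    # Return (horizontal, vertical) lists of sync pulse start times in seconds.
    below = samples < threshold_v                  # True while inside a sync pulse
    edges = np.diff(below.astype(int))
    starts = np.where(edges == 1)[0] + 1           # signal falls below the threshold
    ends = np.where(edges == -1)[0] + 1            # signal rises back above it
    horizontal, vertical = [], []
    for s, e in zip(starts, ends):
        width_s = (e - s) / fs_hz
        # Ordinary line sync pulses are a few microseconds wide; the broad
        # pulses of the vertical interval last roughly half a line or more.
        if width_s > 0.25 * line_period_s:
            vertical.append(s / fs_hz)
        else:
            horizontal.append(s / fs_hz)
    return horizontal, vertical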
By counting sync pulses, a video line selector picks out a selected line from a TV signal; this is used for teletext, on-screen displays and station identification logos, as well as in industrial applications where cameras were used as sensors. Timebase circuits. In an analog receiver with a CRT display, sync pulses are fed to horizontal and vertical "timebase" circuits (commonly called "sweep circuits" in the United States), each consisting of an oscillator and an amplifier. These generate modified sawtooth and parabola current waveforms to scan the electron beam in a linear way. The waveform shapes are necessary to make up for the distance variations from the electron beam source and the screen surface. The oscillators are designed to free-run at frequencies very close to the field and line rates, but the sync pulses cause them to reset at the beginning of each scan line or field, resulting in the necessary synchronization of the beam sweep with the originating signal. The output waveforms from the timebase amplifiers are fed to the horizontal and vertical "deflection coils" wrapped around the CRT tube. These coils produce magnetic fields proportional to the changing current, and these deflect the electron beam across the screen. In the 1950s, the power for these circuits was derived directly from the mains supply. A simple circuit consisted of a series voltage dropper resistance and a rectifier valve (tube) or semiconductor diode. This avoided the cost of a large high voltage mains supply (50 or 60 Hz) transformer. This type of circuit was used for the thermionic valve (vacuum tube) technology. It was inefficient and produced a lot of heat, which led to premature failures in the circuitry. Although failure was common, it was easily repairable. In the 1960s, semiconductor technology was introduced into timebase circuits. During the late 1960s in the UK, synchronous (with the scan line rate) power generation was introduced into solid state receiver designs. These had very complex circuits in which faults were difficult to trace, but had very efficient use of power. In the early 1970s, AC mains (50 or 60 Hz) and line timebase (15,625 Hz) thyristor-based switching circuits were introduced. In the UK, use of the simple (50 Hz) types of power circuits was discontinued. The reasons for the design changes were the electricity supply contamination problems arising from EMI, and supply loading issues due to energy being taken from only the positive half cycle of the mains supply waveform. CRT flyback power supply. Most of the receiver's circuitry (at least in transistor- or IC-based designs) operates from a comparatively low-voltage DC power supply. However, the anode connection for a cathode-ray tube requires a very high voltage (typically 10–30 kV) for correct operation. This voltage is not directly produced by the main power supply circuitry; instead, the receiver makes use of the circuitry used for horizontal scanning. Direct current (DC) is switched through the line output transformer, and alternating current (AC) is induced into the scan coils. At the end of each horizontal scan line the magnetic field, which has built up in both transformer and scan coils by the current, is a source of latent electromagnetic energy. This stored collapsing magnetic field energy can be captured. 
The reverse-flow, short-duration current (lasting about 10% of the line scan time) from both the line output transformer and the horizontal scan coil is discharged again into the primary winding of the flyback transformer by the use of a rectifier which blocks this negative reverse emf. A small-value capacitor is connected across the scan switching device. This tunes the circuit inductances to resonate at a much higher frequency. This slows down (lengthens) the flyback time from the extremely rapid decay rate that would result if they were electrically isolated during this short period. One of the secondary windings on the flyback transformer then feeds this brief high voltage pulse to a Cockcroft–Walton generator design voltage multiplier. This produces the required EHT supply. A flyback converter is a power supply circuit operating on similar principles. A typical modern design incorporates the flyback transformer and rectifier circuitry into a single unit with a captive output lead (known as a diode split line output transformer or an Integrated High Voltage Transformer (IHVT)), so that all high-voltage parts are enclosed. Earlier designs used a separate line output transformer and a well-insulated high voltage multiplier unit. The high frequency (15 kHz or so) of the horizontal scanning allows reasonably small components to be used. Transition to digital. In many countries, over-the-air broadcast television of analog audio and analog video signals has been discontinued, to allow the re-use of the television broadcast radio spectrum for other services such as datacasting and subchannels. The first country to make a wholesale switch to digital over-the-air (terrestrial television) broadcasting was Luxembourg in 2006, followed later in 2006 by the Netherlands; in 2007 by Finland, Andorra, Sweden and Switzerland; in 2008 by Belgium (Flanders) and Germany; in 2009 by the United States (high power stations), southern Canada, the Isle of Man, Norway, and Denmark. In 2010, Belgium (Wallonia), Spain, Wales, Latvia, Estonia, the Channel Islands, San Marino, Croatia, and Slovenia; in 2011 Israel, Austria, Monaco, Cyprus, Japan (excluding Miyagi, Iwate, and Fukushima prefectures), Malta and France; in 2012 the Czech Republic, Arab World, Taiwan, Portugal, Japan (including Miyagi, Iwate, and Fukushima prefectures), Serbia, Italy, Canada, Mauritius, the United Kingdom, the Republic of Ireland, Lithuania, Slovakia, Gibraltar, and South Korea; in 2013, the Republic of Macedonia, Poland, Bulgaria, Hungary, Australia, and New Zealand, completed the transition. The United Kingdom made the transition to digital television between 2008 and 2012, with the exception of Whitehaven, which made the switch over in 2007. The first digital TV-only area in the United Kingdom was Ferryside in Carmarthenshire. The digital television transition in the United States for high-powered transmission was completed on 12 June 2009, the date that the Federal Communications Commission (FCC) set. Almost two million households could no longer watch television because they had not prepared for the transition. The switchover had been delayed by the DTV Delay Act. While the majority of the viewers of over-the-air broadcast television in the U.S. watch full-power stations (which number about 1800), there are three other categories of television stations in the U.S.: low-power broadcasting stations, class A stations, and television translator stations. They were given later deadlines. 
In broadcasting, whatever happens in the United States also influences southern Canada and northern Mexico because those areas are covered by television stations in the U.S. In Japan, the switch to digital began in northeastern Ishikawa Prefecture on 24 July 2010 and ended in 43 of the country's 47 prefectures (including the rest of Ishikawa) on 24 July 2011, but in Fukushima, Iwate, and Miyagi prefectures, the conversion was delayed to 31 March 2012, due to complications from the 2011 Tōhoku earthquake and tsunami and its related nuclear accidents. In Canada, most of the larger cities turned off analog broadcasts on 31 August 2011. China had planned to end analog broadcasting between 2015 and 2018. Brazil switched to digital television on 2 December 2007 in its major cities. It is now estimated that Brazil will end analog broadcasting in 2023. In Malaysia, the Malaysian Communications & Multimedia Commission (MCMC) advertised for tender bids to be submitted in the third quarter of 2009 for the 470 through 742 MHz UHF allocation, to enable Malaysia's broadcast system to move into DTV. The new broadcast band allocation would result in Malaysia's having to build an infrastructure for all broadcasters, using a single digital terrestrial transmission/television broadcast (DTTB) channel. Large portions of Malaysia are covered by television broadcasts from Singapore, Thailand, Brunei, and Indonesia (from Borneo and Batam). From 1 November 2019, no region in Malaysia was still using the analog system, after the states of Sabah and Sarawak finally turned it off on 31 October 2019. In Singapore, digital television under DVB-T2 began on 16 December 2013. The switchover was delayed many times until analog TV was switched off at midnight on 2 January 2019. In the Philippines, the National Telecommunications Commission required all broadcasting companies to end analog broadcasting on 31 December 2015 at 11:59 p.m. Due to the delayed release of the implementing rules and regulations for digital television broadcasting, the target date was moved to 2020. Full digital broadcast is expected in 2021 and all of the analog TV services should be shut down by the end of 2023. In the Russian Federation, the Russian Television and Radio Broadcasting Network (RTRS) disabled analog broadcasting of federal channels in five stages, shutting down broadcasting in multiple federal subjects at each stage. The first region to have analog broadcasting disabled was Tver Oblast on 3 December 2018, and the switchover was completed on 14 October 2019. During the transition, DVB-T2 receivers and monetary compensation for the purchase of terrestrial or satellite digital TV reception equipment were provided to disabled people, World War II veterans, certain categories of retirees and households with income per member below the living wage.
2396
Adhesive
Adhesive, also known as glue, cement, mucilage, or paste, is any non-metallic substance applied to one or both surfaces of two separate items that binds them together and resists their separation. The use of adhesives offers certain advantages over other binding techniques such as sewing, mechanical fastenings, or welding. These include the ability to bind different materials together, the more efficient distribution of stress across a joint, the cost-effectiveness of an easily mechanized process, and greater flexibility in design. Disadvantages of adhesive use include decreased stability at high temperatures, relative weakness in bonding large objects with a small bonding surface area, and greater difficulty in separating objects during testing. Adhesives are typically organized by the method of adhesion followed by "reactive" or "non-reactive", a term which refers to whether the adhesive chemically reacts in order to harden. Alternatively, they can be organized either by their starting physical phase or whether their raw stock is of natural or synthetic origin. Adhesives may be found naturally or produced synthetically. The earliest human use of adhesive-like substances was approximately 200,000 years ago, when Neanderthals produced tar from the dry distillation of birch bark for use in binding stone tools to wooden handles. The first references to adhesives in literature appeared in approximately 2000 BC. The Greeks and Romans made great contributions to the development of adhesives. In Europe, glue was not widely used until the period AD 1500–1700. From then until the 1900s increases in adhesive use and discovery were relatively gradual. Only since the last century has the development of synthetic adhesives accelerated rapidly, and innovation in the field continues to the present. History. Evidence of the earliest known use of adhesives was discovered in central Italy when two stone flakes partially covered with birch-bark tar and a third uncovered stone from the Middle Pleistocene era (circa 200,000 years ago) were found. This is thought to be the oldest discovered human use of tar-hafted stones. The birch-bark-tar adhesive is a simple, one-component adhesive. A study from 2019 showed that birch tar production can be a very simple process—merely involving the burning of birch bark near smooth vertical surfaces in open air conditions. Although sticky enough, plant-based adhesives are brittle and vulnerable to environmental conditions. The first use of compound adhesives was discovered in Sibudu, South Africa. Here, 70,000-year-old stone segments that were once inserted in axe hafts were discovered covered with an adhesive composed of plant gum and red ochre (natural iron oxide) as adding ochre to plant gum produces a stronger product and protects the gum from disintegrating under wet conditions. The ability to produce stronger adhesives allowed middle Stone Age humans to attach stone segments to sticks in greater variations, which led to the development of new tools. More recent examples of adhesive use by prehistoric humans have been found at the burial sites of ancient tribes. Archaeologists studying the sites found that approximately 6,000 years ago the tribesmen had buried their dead together with food found in broken clay pots repaired with tree resins. Another investigation by archaeologists uncovered the use of bituminous cements to fasten ivory eyeballs to statues in Babylonian temples dating to approximately 4000 BC. 
In 2000, a paper revealed the discovery of a 5,200-year-old man nicknamed the "Tyrolean Iceman" or "Ötzi", who was preserved in a glacier near the Austria-Italy border. Several of his belongings were found with him including two arrows with flint arrowheads and a copper hatchet, each with evidence of organic glue used to connect the stone or metal parts to the wooden shafts. The glue was analyzed as pitch, which requires the heating of tar during its production. The retrieval of this tar requires a transformation of birch bark by means of heat, in a process known as pyrolysis. The first references to adhesives in literature appeared in approximately 2000 BC. Further historical records of adhesive use are found from the period spanning 1500–1000 BC. Artifacts from this period include paintings depicting wood gluing operations and a casket made of wood and glue in King Tutankhamun's tomb. Other ancient Egyptian artifacts employ animal glue for bonding or lamination. Such lamination of wood for bows and furniture is thought to have extended their life and was accomplished using casein (milk protein)-based glues. The ancient Egyptians also developed starch-based pastes for the bonding of papyrus to clothing and a plaster of Paris-like material made of calcined gypsum. From AD 1 to 500 the Greeks and Romans made great contributions to the development of adhesives. Wood veneering and marquetry were developed, the production of animal and fish glues refined, and other materials utilized. Egg-based pastes were used to bond gold leaves, and incorporated various natural ingredients such as blood, bone, hide, milk, cheese, vegetables, and grains. The Greeks began the use of slaked lime as mortar, while the Romans furthered mortar development by mixing lime with volcanic ash and sand. This material, known as pozzolanic cement, was used in the construction of the Roman Colosseum and Pantheon. The Romans were also the first people known to have used tar and beeswax as caulk and sealant between the wooden planks of their boats and ships. In Central Asia, the rise of the Mongols in approximately AD 1000 can be partially attributed to the good range and power of the bows of Genghis Khan's hordes. These bows were made of a bamboo core, with horn on the belly (facing towards the archer) and sinew on the back, bound together with animal glue. In Europe, glue fell into disuse until the period AD 1500–1700. At this time, world-renowned cabinet and furniture makers such as Thomas Chippendale and Duncan Phyfe began to use adhesives to hold their products together. In 1690, the first commercial glue plant was established in the Netherlands. This plant produced glues from animal hides. In 1750, the first British glue patent was issued for fish glue. The following century witnessed the manufacture of casein glues in German and Swiss factories. In 1876, the first U.S. patent (number 183,024) was issued to the Ross brothers for the production of casein glue. The first U.S. postage stamps used starch-based adhesives when issued in 1847. The first U.S. patent (number 61,991) on dextrin (a starch derivative) adhesive was issued in 1867. Natural rubber was first used as a material for adhesives starting in 1830, which marked the starting point of modern adhesives. In 1862, a British patent (number 3288) was issued for the plating of metal with brass by electrodeposition to obtain a stronger bond to rubber. 
The development of the automobile and the need for rubber shock mounts required stronger and more durable bonds of rubber and metal. This spurred the development of cyclized rubber treated in strong acids. By 1927, this process was used to produce solvent-based thermoplastic rubber cements for metal to rubber bonding. Natural rubber-based sticky adhesives were first used on a backing by Henry Day (U.S. Patent 3,965) in 1845. Later these kinds of adhesives were used in cloth-backed surgical and electric tapes. By 1925, the pressure-sensitive tape industry was born. Today, sticky notes, Scotch Tape, and other tapes are examples of pressure-sensitive adhesives (PSA). A key step in the development of synthetic plastics was the introduction of a thermoset plastic known as Bakelite phenolic in 1910. Within two years, phenolic resin was applied to plywood as a coating varnish. In the early 1930s, phenolics gained importance as adhesive resins. The 1920s, 1930s, and 1940s witnessed great advances in the development and production of new plastics and resins due to the First and Second World Wars. These advances greatly improved the development of adhesives by allowing the use of newly developed materials that exhibited a variety of properties. With changing needs and ever evolving technology, the development of new synthetic adhesives continues to the present. However, due to their low cost, natural adhesives are still more commonly used. Types. Adhesives are typically organized by the method of adhesion. These are then organized into reactive and non-reactive adhesives, which refers to whether the adhesive chemically reacts in order to harden. Alternatively they can be organized by whether the raw stock is of natural or synthetic origin, or by their starting physical phase. By reactiveness. Non-reactive. Drying. There are two types of adhesives that harden by drying: "solvent-based adhesives" and "polymer dispersion adhesives", also known as "emulsion adhesives". Solvent-based adhesives are a mixture of ingredients (typically polymers) dissolved in a solvent. White glue, contact adhesives and rubber cements are members of the "drying adhesive" family. As the solvent evaporates, the adhesive hardens. Depending on the chemical composition of the adhesive, they will adhere to different materials to greater or lesser degrees. Polymer dispersion adhesives are milky-white dispersions often based on polyvinyl acetate (PVAc). They are used extensively in the woodworking and packaging industries. They are also used with fabrics and fabric-based components, and in engineered products such as loudspeaker cones. Pressure-sensitive. "Pressure-sensitive adhesives" (PSA) form a bond by the application of light pressure to marry the adhesive with the adherend. They are designed to have a balance between flow and resistance to flow. The bond forms because the adhesive is soft enough to flow (i.e., "wet") to the adherend. The bond has strength because the adhesive is hard enough to resist flow when stress is applied to the bond. Once the adhesive and the adherend are in close proximity, molecular interactions, such as van der Waals forces, become involved in the bond, contributing significantly to its ultimate strength. PSAs are designed for either permanent or removable applications. Examples of permanent applications include safety labels for power equipment, foil tape for HVAC duct work, automotive interior trim assembly, and sound/vibration damping films. 
Some high performance permanent PSAs exhibit high adhesion values and can support kilograms of weight per square centimeter of contact area, even at elevated temperatures. Permanent PSAs may initially be removable (for example to recover mislabeled goods) and build adhesion to a permanent bond after several hours or days. Removable adhesives are designed to form a temporary bond, and ideally can be removed after months or years without leaving residue on the adherend. Removable adhesives are used in applications such as surface protection films, masking tapes, bookmark and note papers, barcode labels, price marking labels, promotional graphics materials, and for skin contact (wound care dressings, EKG electrodes, athletic tape, analgesic and transdermal drug patches, etc.). Some removable adhesives are designed to repeatedly stick and unstick. They have low adhesion, and generally cannot support much weight. Pressure-sensitive adhesive is used in Post-it notes. Pressure-sensitive adhesives are manufactured with either a liquid carrier or in 100% solid form. Articles are made from liquid PSAs by coating the adhesive and drying off the solvent or water carrier. They may be further heated to initiate a cross-linking reaction and increase molecular weight. 100% solid PSAs may be low viscosity polymers that are coated and then reacted with radiation to increase molecular weight and form the adhesive, or they may be high viscosity materials that are heated to reduce viscosity enough to allow coating, and then cooled to their final form. The major raw materials for PSAs are acrylate-based polymers. Contact. "Contact adhesives" are used where strong bonds with high shear resistance are needed, such as laminates (for example, bonding Formica to a wooden counter) and footwear (attaching outsoles to uppers). Natural rubber and polychloroprene (Neoprene) are commonly used contact adhesives. Both of these elastomers undergo strain crystallization. Contact adhesives must be applied to both surfaces and allowed some time to dry before the two surfaces are pushed together. Some contact adhesives require as long as 24 hours to dry before the surfaces can be held together. Once the surfaces are pushed together, the bond forms very quickly. It is usually not necessary to apply pressure for a long time, so there is less need for clamps. Hot. "Hot adhesives", also known as "hot melt adhesives", are thermoplastics applied in molten form (in the 65–180 °C range) which solidify on cooling to form strong bonds between a wide range of materials. Ethylene-vinyl acetate-based hot-melts are particularly popular for crafts because of their ease of use and the wide range of common materials they can join. A glue gun is one method of applying hot adhesives. The glue gun melts the solid adhesive, then allows the liquid to pass through its barrel onto the material, where it solidifies. Thermoplastic glue may have been invented around 1940 by Procter & Gamble as a solution to the problem that water-based adhesives, commonly used in packaging at that time, failed in humid climates, causing packages to open. Reactive. Anaerobic. Anaerobic adhesives cure when in contact with metal, in the absence of oxygen. They work well in a close-fitting space, as when used as a thread-locking fluid. Multi-part. "Multi-component adhesives" harden by mixing two or more components which chemically react. This reaction causes polymers to cross-link into acrylates, urethanes, and epoxies. 
There are several commercial combinations of multi-component adhesives in use in industry. The individual components of a multi-component adhesive are not adhesive by nature. The individual components react with each other after being mixed and show full adhesion only on curing. The multi-component resins can be either solvent-based or solvent-less. The solvents present in the adhesives are a medium for the polyester or the polyurethane resin. The solvent is removed during the curing process. Pre-mixed and frozen adhesives. "Pre-mixed and frozen adhesives" (PMFs) are adhesives that are mixed, deaerated, packaged, and frozen. As it is necessary for PMFs to remain frozen before use, once they are frozen at −80 °C they are shipped with dry ice and are required to be stored at or below −40 °C. PMF adhesives eliminate mixing mistakes by the end user and reduce exposure to curing agents that can contain irritants or toxins. PMFs were introduced commercially in the 1960s and are commonly used in aerospace and defense. One-part. "One-part adhesives" harden via a chemical reaction with an external energy source, such as radiation, heat, or moisture. "Ultraviolet" (UV) "light curing adhesives", also known as "light curing materials" (LCM), have become popular within the manufacturing sector due to their rapid curing time and strong bond strength. Light curing adhesives can cure in as little as one second and many formulations can bond dissimilar substrates (materials) and withstand harsh temperatures. These qualities make UV curing adhesives essential to the manufacturing of items in many industrial markets such as electronics, telecommunications, medical, aerospace, glass, and optical. Unlike traditional adhesives, UV light curing adhesives not only bond materials together but they can also be used to seal and coat products. They are generally acrylic-based. "Heat curing adhesives" consist of a pre-made mixture of two or more components. When heat is applied the components react and cross-link. This type of adhesive includes thermoset epoxies, urethanes, and polyimides. "Moisture curing adhesives" cure when they react with moisture present on the substrate surface or in the air. This type of adhesive includes cyanoacrylates and urethanes. By origin. Natural. Natural adhesives are made from organic sources such as vegetable starch (dextrin), natural resins, or animals (e.g. the milk protein casein and hide-based animal glues). These are often referred to as bioadhesives. One example is a simple paste made by cooking flour in water. Starch-based adhesives are used in corrugated board and paper sack production, paper tube winding, and wallpaper adhesives. Casein glue is mainly used to adhere glass bottle labels. Animal glues have traditionally been used in bookbinding, wood joining, and many other areas but now are largely replaced by synthetic glues except in specialist applications like the production and repair of stringed instruments. Albumen made from the protein component of blood has been used in the plywood industry. Masonite, a wood hardboard, was originally bonded using natural wood lignin, an organic polymer, though most modern particle boards such as MDF use synthetic thermosetting resins. Synthetic. Synthetic adhesives are made out of organic compounds. Many are based on elastomers, thermoplastics, emulsions, and thermosets. Examples of thermosetting adhesives are: epoxy, polyurethane, cyanoacrylate and acrylic polymers. 
The first commercially produced synthetic adhesive was Karlsons Klister in the 1920s. Application. Applicators of different adhesives are designed according to the adhesive being used and the size of the area to which the adhesive will be applied. The adhesive is applied to either one or both of the materials being bonded. The pieces are aligned and pressure is added to aid in adhesion and rid the bond of air bubbles. Common ways of applying an adhesive include brushes, rollers, using films or pellets, spray guns and applicator guns (e.g., a caulk gun). All of these can be used manually or automated as part of a machine. Mechanisms of adhesion. For an adhesive to be effective it must have three main properties. Firstly, it must be able to wet the base material. Wetting is the ability of a liquid to maintain contact with a solid surface. It must also increase in strength after application, and finally it must be able to transmit load between the two surfaces/substrates being adhered. Adhesion, the attachment between adhesive and substrate, may occur either by mechanical means, in which the adhesive works its way into small pores of the substrate, or by one of several chemical mechanisms. The strength of adhesion depends on many factors, including the means by which it occurs. In some cases, an actual chemical bond occurs between adhesive and substrate. In others, electrostatic forces, as in static electricity, hold the substances together. A third mechanism involves the van der Waals forces that develop between molecules. A fourth means involves the moisture-aided diffusion of the glue into the substrate, followed by hardening. Methods to improve adhesion. The quality of adhesive bonding depends strongly on the ability of the adhesive to efficiently cover (wet) the substrate area. This happens when the surface energy of the substrate is greater than the surface energy of the adhesive. However, high-strength adhesives have high surface energy. Thus, they bond poorly to low-surface-energy polymers or other materials. To solve this problem, surface treatment can be used to increase the surface energy as a preparation step before adhesive bonding. Importantly, surface preparation provides a reproducible surface allowing consistent bonding results. The commonly used surface activation techniques include plasma activation, flame treatment and wet chemistry priming. Failure. There are several factors that could contribute to the failure of two adhered surfaces. Sunlight and heat may weaken the adhesive. Solvents can deteriorate or dissolve the adhesive. Physical stresses may also cause the separation of surfaces. When subjected to loading, debonding may occur at different locations in the adhesive joint. The major fracture types are the following: Cohesive fracture. "Cohesive fracture" is obtained if a crack propagates in the bulk polymer which constitutes the adhesive. In this case the surfaces of both adherends after debonding will be covered by fractured adhesive. The crack may propagate in the center of the layer or near an interface. For this last case, the cohesive fracture can be said to be "cohesive near the interface". Adhesive fracture. "Adhesive fracture" (sometimes referred to as "interfacial fracture") is when debonding occurs between the adhesive and the adherend. In most cases, the occurrence of adhesive fracture for a given adhesive goes along with smaller fracture toughness. Other types of fracture. Other, mixed types of fracture can also occur. Design of adhesive joints. 
As a general design rule, the material properties of the object need to be greater than the forces anticipated during its use (which depend on the geometry and the loads involved). The engineering work will consist of having a good model to evaluate the function. For most adhesive joints, this can be achieved using fracture mechanics. Concepts such as the stress concentration factor and the strain energy release rate can be used to predict failure. In such models, the behavior of the adhesive layer itself is neglected and only the adherends are considered. Failure will also very much depend on the opening "mode" of the joint. As the loads are usually fixed, an acceptable design will result from a combination of a material selection procedure and geometry modifications, if possible. In adhesively bonded structures, the global geometry and loads are fixed by structural considerations and the design procedure focuses on the material properties of the adhesive and on local changes to the geometry. Increasing the joint resistance is usually obtained by suitable design of its geometry. Shelf life. Some glues and adhesives have a limited shelf life. Shelf life is dependent on multiple factors, the foremost of which is temperature. Adhesives may lose their effectiveness at high temperatures, as well as become increasingly stiff. Other factors affecting shelf life include exposure to oxygen or water vapor.
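As a simple numerical illustration of the design rule described above under "Design of adhesive joints", the following sketch compares the average shear stress in a single-lap joint against an assumed adhesive shear strength. It is an illustration only: real joint design relies on fracture-mechanics concepts and detailed stress analysis, and every number below is hypothetical.

def average_lap_shear_stress(load_n, overlap_m, width_m):
    # Average shear stress (Pa) over the bonded area of a single-lap joint:
    # stress = force / (overlap length x joint width)
    return load_n / (overlap_m * width_m)

# Hypothetical joint: 2 kN load, 25 mm overlap, 20 mm width
stress_pa = average_lap_shear_stress(2000.0, 0.025, 0.020)    # 4.0 MPa
assumed_strength_pa = 10e6                                    # assumed 10 MPa adhesive shear strength
design_ok = stress_pa < assumed_strength_pa                   # True for this example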
2397
Anthony Hopkins
Sir Philip Anthony Hopkins (born 31 December 1937) is a Welsh actor, director, and producer. One of Britain's most recognisable and prolific actors, he is known for his performances on the screen and stage. Hopkins has received many awards and nominations during his career, including two Academy Awards, four BAFTA Awards, two Primetime Emmy Awards, and an Olivier Award. He has also received the Cecil B. DeMille Award in 2005 and the BAFTA Fellowship for lifetime achievement in 2008. He was knighted by Queen Elizabeth II for his services to drama in 1993. After graduating from the Royal Welsh College of Music & Drama in 1957, Hopkins trained at the Royal Academy of Dramatic Art in London. He was then spotted by Laurence Olivier who invited him to join the Royal National Theatre in 1965. Productions at the National included "King Lear" (his favourite Shakespeare play), "Coriolanus", "Macbeth", and "Antony and Cleopatra". In 1985, he received great acclaim and a Laurence Olivier Award for his performance in the David Hare play "Pravda". His last stage play was a West End production of "M. Butterfly" in 1989. Hopkins achieved recognition in film playing Richard the Lionheart in "The Lion in Winter" (1968), receiving a nomination for the BAFTA Award for Best Actor in a Supporting Role. During this period he starred in "A Bridge Too Far" (1977) and "The Elephant Man" (1980). He received two Academy Awards for Best Actor for "The Silence of the Lambs" (1991) and "The Father" (2020), becoming the oldest Best Actor Oscar winner to date. His other Oscar-nominated films include "The Remains of the Day" (1993), "Nixon" (1995), "Amistad" (1997) and "The Two Popes" (2019). Other notable films include "84 Charing Cross Road" (1987), "Howards End" (1992), "Bram Stoker's Dracula" (1992), "Shadowlands" (1993), "Legends of the Fall" (1994), "The Mask of Zorro" (1998), and MCU's "Thor" franchise (2011–2017). He reprised the role as Hannibal Lecter in "Hannibal" (2001) and "Red Dragon" (2002). Since making his television debut with the BBC in 1967, Hopkins has continued to appear on television. In 1973, he received a British Academy Television Award for Best Actor for his performance in "War and Peace". He received two Primetime Emmy Awards for Outstanding Actor in a Drama Series for "The Lindbergh Kidnapping Case" (1976) and "The Bunker" (1981). Other notable projects include the BBC film "The Dresser" (2015), PBS's "King Lear" (2018) and the HBO series "Westworld" (2016–2018), for which he received another Primetime Emmy nomination. Early life and education. Philip Anthony Hopkins was born in the Margam district of Port Talbot on 31 December 1937, the son of Annie Muriel (née Yeates) and baker Richard Arthur Hopkins. One of his grandfathers was from Wiltshire, England. He stated his father's working-class values have always underscored his life, "Whenever I get a feeling that I may be special or different, I think of my father and I remember his hands – his hardened, broken hands." His school days were unproductive; he would rather immerse himself in art, such as painting and drawing, or playing the piano than attend to his studies. In 1949, to instil discipline, his parents insisted he attend Jones' West Monmouth Boys' School in Pontypool. He remained there for five terms and was then educated at Cowbridge Grammar School in the Vale of Glamorgan. In an interview in 2002, he stated, "I was a poor learner, which left me open to ridicule and gave me an inferiority complex. 
I grew up absolutely convinced I was stupid." Hopkins was inspired by fellow Welsh actor Richard Burton, whom he met at the age of 15. He later called Burton "very gracious, very nice" but elaborated, "I don't know where everyone gets the idea we were good friends. I suppose it's because we are both Welsh and grew up near the same town. For the record, I didn't really know him at all." He enrolled at the Royal Welsh College of Music & Drama in Cardiff, from which he graduated in 1957. He next met Burton in 1975 as Burton prepared to take over Hopkins's role as the psychiatrist in Peter Shaffer's "Equus", with Hopkins stating, "He was a phenomenal actor. So was Peter O'Toole – they were wonderful, larger-than-life characters." After two years of national service between 1958 and 1960, which he served in the British Army, Hopkins moved to London to study at the Royal Academy of Dramatic Art. Acting career. Theatre. 1960–1967: Theatre debut and Royal National Theatre. Hopkins made his first professional stage appearance in the Palace Theatre, Swansea, in 1960 with Swansea Little Theatre's production of "Have a Cigarette". In 1965, after several years in repertory, he was spotted by Laurence Olivier, who invited him to join the Royal National Theatre in London. Hopkins became Olivier's understudy, and filled in when Olivier was struck with appendicitis during a 1967 production of August Strindberg's "The Dance of Death". Olivier later recalled the performance in his memoir, "Confessions of an Actor". Up until that night, Hopkins had always been nervous prior to going on stage. This has since changed, and Hopkins quoted his mentor as saying: "He [Olivier] said: 'Remember: nerves is [sic] vanity – you're wondering what people think of you; to hell with them, just jump off the edge'. It was great advice." 1974–1979: "Equus" and "The Tempest". In October 1974, Hopkins played the psychologist Dysart in the original Broadway production of Sir Peter Shaffer's play "Equus", starring opposite Peter Firth. For this performance, he received the Drama Desk Award for Outstanding Actor in a Play for the 1974–75 season. In 1979, Hopkins appeared as Prospero in a production of "The Tempest" held at the Mark Taper Forum in Los Angeles. 1983–1989: "Pravda" and "Antony and Cleopatra". In 1983, Hopkins became a company member of The Mirror Theater Ltd's Repertory Company. In 1984, he portrayed Deeley in Harold Pinter's play "Old Times" at the Roundabout Theatre in New York. In 1985, Hopkins starred opposite Colin Firth in the Arthur Schnitzler play "The Lonely Road" at The Old Vic in London. That same year, he featured as Lambert Le Roux in the National Theatre production of "Pravda", Sir David Hare and Howard Brenton's satirical play on the British newspaper industry in the Thatcher era. Receiving acclaim for his performance, Hopkins won the Laurence Olivier Award for Outstanding Achievement. Frank Rich in his "New York Times" review wrote, "Mr. Hopkins creates a memorable image of a perversely brilliant modern-day barbarian." In 1986, he starred in David Hare's production of "King Lear", Hopkins's favourite Shakespeare play, at the National Theatre. The next year, he starred as Antony in the National Theatre production of "Antony and Cleopatra" opposite Judi Dench, and in 1989, Hopkins made his last appearance on stage in a West End production of "M. Butterfly". "It was a torment", he claimed in a later interview. Of a matinee where nobody laughed, there was, he said, "not a titter". 
When the lights came up, the cast realised the entire audience was Japanese. "Oh God," he recalled, "You'd go to your dressing room and someone would pop their head round the door and say, 'Coffee? Tea?' And I'd think, 'An open razor, please.'" Film. 1968–1978: Film debut and Attenborough collaborations. In 1968, Hopkins got his break in "The Lion in Winter" playing Richard the Lionheart, a performance which saw him nominated for the BAFTA Award for Best Actor in a Supporting Role. Making a name for himself as a screen actor, he appeared in Frank Pierson's neo-noir action thriller "The Looking Glass War" (1970), and Étienne Périer's "When Eight Bells Toll" (1971). In the first of five collaborations with director Richard Attenborough, Hopkins starred in 1972 as British politician David Lloyd George in "Young Winston", and in 1977 he played British Army officer John Frost in Attenborough's World War II-set film "A Bridge Too Far". Hopkins starred in a film adaptation of the Henrik Ibsen play "A Doll's House" (1973) alongside Claire Bloom, Ralph Richardson, Denholm Elliott, and Edith Evans. He then appeared in the comedy "The Girl from Petrovka" (1974) with Goldie Hawn and Hal Holbrook and also starred in the Richard Lester suspense film "Juggernaut" opposite Richard Harris and Omar Sharif. In 1978 he starred in "International Velvet", the sequel to "National Velvet" (1944), with Tatum O'Neal and Christopher Plummer, directed by Bryan Forbes. That same year he also starred in Attenborough's psychological horror film "Magic", about a demonic ventriloquist's puppet, which Gene Siskel named one of the best films of the year. 1980–1989: "The Elephant Man" and other roles. In 1980, he starred in David Lynch's "The Elephant Man" as the English doctor Sir Frederick Treves, who attends to Joseph Merrick (portrayed by John Hurt), a severely deformed man in 19th century London. The film received critical praise and eight Academy Award nominations, including for Best Picture. That year he also starred opposite Shirley MacLaine in "A Change of Seasons" and famously did not get along with MacLaine, saying "she was the most obnoxious actress I have ever worked with." The film was an immense box office and critical failure. In 1984, he starred opposite Mel Gibson in "The Bounty" as William Bligh, captain of the Royal Navy ship HMS "Bounty", in a more accurate retelling of the mutiny on the "Bounty". 1990–1998: Hannibal Lecter and Merchant-Ivory films. Hopkins won acclaim among critics and audiences as the cannibalistic serial killer Hannibal Lecter in "The Silence of the Lambs", for which he won the Academy Award for Best Actor in 1991, with Jodie Foster as Clarice Starling, who also won for Best Actress. The film won Best Picture, Best Director and Best Adapted Screenplay, and Hopkins also picked up his first BAFTA for Best Actor. Hopkins reprised his role as Lecter twice: in Ridley Scott's "Hannibal" (2001) and "Red Dragon" (2002). His original portrayal of the character in "The Silence of the Lambs" has been labelled by the AFI as the number-one film villain. Director Jonathan Demme wanted a British actor for the role, with Jodie Foster stating, "Lecter is a manipulator and has a way of using language to keep people at bay. You wanted to see that Shakespearean monster." At the time he was offered the role, Hopkins was making a return to the London stage, performing in "M. Butterfly". 
He had come back to Britain after living for a number of years in Hollywood, having all but given up on a career there, saying, "Well that part of my life's over; it's a chapter closed. I suppose I'll just have to settle for being a respectable actor poncing around the West End and doing respectable BBC work for the rest of my life." Hopkins played the iconic villain in adaptations of the first three of the Lecter novels by Thomas Harris. The author was reportedly pleased with Hopkins's portrayal of his antagonist. However, Hopkins stated that "Red Dragon" would feature his final performance as the character and that he would not reprise even a narrative role in the next addition to the series, "Hannibal Rising". In 1991, Hopkins also featured in Mark Joffe's film "Spotswood", and he played Professor Van Helsing in Francis Ford Coppola's "Bram Stoker's Dracula" (1992). In 1992, Hopkins starred in Merchant-Ivory's period film based on the E. M. Forster novel "Howards End". Hopkins acted alongside Emma Thompson and Helena Bonham Carter, playing the cold businessman Henry Wilcox. The film received enormous critical acclaim, with critic Leonard Maltin calling it "extraordinarily good on every level." The following year, Hopkins reunited with Merchant-Ivory and Emma Thompson in "The Remains of the Day" (1993), a film set in 1950s post-war Britain based on the novel by Kazuo Ishiguro. The film was ranked by the British Film Institute as the 64th greatest British film of the 20th century. Starring as the butler Stevens, Hopkins named it among his favourite films. He was nominated for an Academy Award for Best Actor for his performance, and received the BAFTA Award for Best Actor. Hopkins portrayed Oxford academic C. S. Lewis in the 1993 British biographical film "Shadowlands", for which he was nominated for a BAFTA Award for Best Actor. During the 1990s, Hopkins had the chance to work with Bart the Bear in two films: "Legends of the Fall" (1994) and "The Edge" (1997). According to trainer Lynn Seus, "Tony Hopkins was absolutely brilliant with Bart...He acknowledged and respected him like a fellow actor. He would spend hours just looking at Bart and admiring him. He did so many of his own scenes with Bart." Hopkins was Britain's highest-paid performer in 1998, starring in "The Mask of Zorro" and "Meet Joe Black", and also agreed to reprise his role as Dr Hannibal Lecter for a fee of £15 million. 2000–2009: Independent films and studio films. In 2000, Hopkins narrated Ron Howard's live-action remake of "How the Grinch Stole Christmas". He then reprised the role of Hannibal Lecter in "Hannibal" (2001), the sequel to "The Silence of the Lambs" (1991). Director Ridley Scott and actress Julianne Moore replaced Jonathan Demme and Jodie Foster, who had declined to participate in the sequel. Hopkins agreed to take the role after approving of the script. In the book, Lecter uses bandages to disguise himself as a plastic surgery patient. This was left out of the film because Scott and Hopkins agreed not to disguise his face. Hopkins said: "It's as if he's making a statement—'catch me if you can'. With his big hat, he's so obvious that nobody thinks he's Hannibal Lecter. I've always thought he's a very elegant man, a Renaissance man." In the film, Lecter is first seen in Florence "as the classical Lecter, lecturing and being smooth", according to Hopkins. 
For the portion of the film set in the U.S., Hopkins changed his appearance by building up muscle and cropping his hair short "to make him like a mercenary, that he would be so fit and so strong that he could just snap somebody in two if they got ... in his way". The film broke international box office records, grossing $351 million, but received mixed reviews from critics. Hopkins starred in the third film in the series, "Red Dragon" (2002), alongside Ralph Fiennes, Edward Norton, Harvey Keitel, Emily Watson, and Philip Seymour Hoffman. The film received favourable reviews and was a box office hit. In 2003, Hopkins received a star on the Hollywood Walk of Fame. Hopkins stated that his role as Burt Munro, whom he portrayed in his 2005 film "The World's Fastest Indian", was his favourite. He also asserted that Munro was the easiest role that he had played because both men have a similar outlook on life. In 2006, Hopkins was the recipient of the Golden Globe Cecil B. DeMille Award for lifetime achievement. In 2008, he received the BAFTA Academy Fellowship Award, the highest award the British Film Academy can bestow. In a 2003 poll conducted by Channel 4, Hopkins was ranked seventh on its list of the 100 Greatest Movie Stars. 2010–2017: "Thor" franchise and action films. On 24 February 2010, it was announced that Hopkins had been cast in "The Rite", which was released on 28 January 2011. He played a priest who is "an expert in exorcisms and whose methods are not necessarily traditional". Hopkins, an agnostic who is quoted as saying "I don't know what I believe, myself personally", reportedly wrote a line—"Some days I don't know if I believe in God or Santa Claus or Tinkerbell"—into his character to identify with it. In 2011, Hopkins said, "what I enjoy is uncertainty. ... I don't know. You don't know." On 21 September 2011, it was announced that Hopkins had been cast as Heineken owner Freddy Heineken in a film about his 1983 kidnapping, based on the account by crime journalist Peter R. de Vries; "Kidnapping Freddy Heineken" was released in 2015. Hopkins portrayed Odin, the Allfather or "king" of Asgard, in the 2011 film adaptation of Marvel Comics' "Thor" and would go on to reprise his role as Odin in "Thor: The Dark World" in 2013 and again in 2017's "Thor: Ragnarok". Hopkins portrayed Alfred Hitchcock in Sacha Gervasi's biopic "Hitchcock" alongside Helen Mirren, who played Hitchcock's wife, Alma Reville. The film focuses on the making of "Psycho" and what followed. He starred in the comedy action film "Red 2" (2013) as the main antagonist Edward Bailey. In 2014, he portrayed Methuselah in Darren Aronofsky's "Noah". Hopkins played Autobot ally Sir Edmund Burton in "Transformers: The Last Knight", which was released in June 2017. 2019–2021: Career resurgence and awards success. In 2019, Hopkins portrayed Pope Benedict XVI opposite Jonathan Pryce as Pope Francis in Fernando Meirelles's "The Two Popes". He stated, "The great treasure was working with – apart from [director] Meirelles – Pryce. We're both from Wales. He's from the north, and I'm from the south". The film is set in the Vatican City in the aftermath of the Vatican leaks scandal and follows Pope Benedict XVI as he attempts to convince Cardinal Jorge Mario Bergoglio to reconsider his decision to resign as archbishop, while confiding his own intention to abdicate the papacy. In August 2019, the film premiered at the Telluride Film Festival to critical acclaim. The film began streaming on Netflix on 20 December 2019. 
The performances of Pryce and Hopkins, as well as Anthony McCarten's screenplay, received high praise from critics, and all three men received nominations for their work at the Academy Awards, Golden Globes and British Academy Film Awards. In 2020, Hopkins played a man struggling with Alzheimer's disease in "The Father". The film premiered at the Sundance Film Festival, where it received critical acclaim, with many critics praising Hopkins's performance and calling him a standout and Oscar frontrunner. The film also stars Olivia Colman as his daughter. It is based on the Tony Award-nominated play "Le Père" by Florian Zeller, who also directed the film. "The Father" was released on 18 December 2020 by Sony Pictures Classics. In a Q&A at the Telluride Film Festival, Hopkins praised both Colman and Zeller, saying the working experience "might've been the highlight of my life". Hopkins mentioned how lucky he had been over the previous five years, working with Ian McKellen in "The Dresser", Emma Thompson in "King Lear", and Jonathan Pryce in "The Two Popes". Hopkins won the BAFTA Award for Best Actor in a Leading Role for his performance in "The Father", making it his fourth BAFTA and his third for Best Actor. He also won a second Academy Award for Best Actor for his role, becoming the oldest person to win an acting Oscar. Hopkins did not attend the Oscars ceremony, but accepted the award the following day in a video posted on social media from Wales, saying: "Here I am in my homeland in Wales. And at 83 years of age, I did not expect to get this award. I really didn't and am very grateful to the Academy and thank you." He also paid tribute to fellow nominee Chadwick Boseman, who had died the previous year. Television. 1967–1973: Television debut and Masterpiece Theatre. He made his small-screen debut in a 1967 BBC broadcast of "A Flea in Her Ear". His first starring role in a film had come in 1964 in "Changes", a short directed by Drewe Henley, written and produced by James Scott and co-starring Jacqueline Pearce. Hopkins portrayed Charles Dickens in the BBC television film "The Great Inimitable Mr. Dickens" in 1970, and Pierre Bezukhov in the BBC miniseries "War and Peace" (1972), receiving the British Academy Television Award for Best Actor for his performance in the latter. In 1973 he again portrayed David Lloyd George in the BBC miniseries "The Edwardians", which aired in the US in 1974 on "Masterpiece Theatre". 1981–1993: Miniseries and awards success. In 1981, he starred in the CBS television film "The Bunker", portraying Adolf Hitler during the weeks in and around his underground bunker in Berlin before and during the Battle of Berlin. John O'Connor praised Hopkins in his "New York Times" review: "The portrait becomes all the more riveting through an extraordinarily powerful performance from Anthony Hopkins. His Hitler is mad, often contemptible, but always understandable. Part of the problem, perhaps, is that the monster becomes a little too understandable. He is not made sympathetic, exactly, but he is given decidedly pathetic dimensions, making him just that much more "acceptable" as a dramatic and historical character." For his performance he received a Primetime Emmy Award for Outstanding Lead Actor in a Limited Series or Movie. That same year he starred as Paul the Apostle opposite Robert Foxworth as Saint Peter in the biblical drama miniseries "Peter and Paul" (1981). The following year he starred as Quasimodo in the CBS television film "The Hunchback of Notre Dame" (1982). 
The film also starred Derek Jacobi, David Suchet, Tim Pigott-Smith, Nigel Hawthorne, and John Gielgud. He also starred in "Strangers and Brothers" (1984), "Arch of Triumph" (1984), "Guilty Conscience" (1985), "Mussolini and I" (1985), and "The Tenth Man" (1988). In 1989 he starred as Abel Magwitch in the miniseries "Great Expectations", which was broadcast on ITV in the UK and The Disney Channel in the US. The adaptation of the Dickens novel also starred Jean Simmons and John Rhys-Davies. He received his fourth Primetime Emmy Award nomination, this time for Outstanding Supporting Actor in a Limited Series or Movie. 2015–2018: "The Dresser", "Westworld" and "King Lear". In October 2015, Hopkins appeared as Sir in a BBC Two production of Ronald Harwood's "The Dresser", alongside Ian McKellen, Edward Fox and Emily Watson. "The Dresser" is set in a London theatre during the Blitz, where an aging actor-manager, Sir, prepares for his starring role in "King Lear" with the help of his devoted dresser, Norman. Hopkins described his role as Sir as "the highlight of my life. It was a chance to work with the actors I had run away from. To play another actor is fun because you know the ins and outs of their thinking – especially with someone like Sir, who is a diabolically insecure, egotistical man." In 2018 he spoke again about the impact the role had on him: "When I was at the Royal National Theatre all those years ago, I knew I had something in me, but I didn't have the discipline. I had a Welsh temperament and didn't have that 'fitting in' mechanism. I would fight, I would rebel. I thought, 'Well, I don't belong here.' And for almost 50 years afterwards, I felt that edge of, 'I don't belong anywhere, I'm a loner.' But in "The Dresser", when Ian [McKellen] responded, it was wonderful. We got on so well and I suddenly felt at home, as though that lack of belonging was all in my imagination, all in my vanity". Beginning in October 2016, Hopkins starred as Robert Ford in the HBO sci-fi series "Westworld", for which he received a Primetime Emmy Award nomination. Hopkins starred as Lear in the 2018 television film "King Lear", broadcast on BBC Two on 28 May 2018, acting alongside Emma Thompson, Florence Pugh, and Jim Broadbent. Hopkins received a Screen Actors Guild Award nomination for his performance. "Vulture" stated the film "capture[d] the heart of the classic Shakespeare tragedy", and described Hopkins' performance as "devastating". Composing. Single. In a 2012 interview, Hopkins stated, "I've been composing music all my life and if I'd been clever enough at school I would like to have gone to music college. As it was I had to settle for being an actor." In 1986, he released a single called "Distant Star", which peaked at No. 75 in the UK Singles Chart. In 2007, he announced he would retire temporarily from the screen to tour around the world. Hopkins has also written music for the concert hall, in collaboration with Stephen Barton as orchestrator. These compositions include "The Masque of Time", given its world premiere with the Dallas Symphony Orchestra in October 2008, and "Schizoid Salsa". Albums. On 31 October 2011, André Rieu released an album including a waltz which Hopkins had composed in 1964, at the age of 26. Hopkins had never heard his composition, "And the Waltz Goes On", before it was premiered by Rieu's orchestra in Vienna; Rieu's album was given the same name as Hopkins's piece. 
In January 2012, Hopkins released "Composer", an album of classical music performed by the City of Birmingham Symphony Orchestra and issued on CD via the UK radio station Classic FM. The album consists of nine of his original works and film scores, with one of the pieces titled "Margam" in tribute to Margam, the area of Port Talbot in Wales where he was born. Directing. In 1990, Hopkins directed a film about his Welsh compatriot, poet Dylan Thomas, titled "Dylan Thomas: Return Journey", which was his directing debut for the screen. In the same year, as part of the restoration process for the Stanley Kubrick film "Spartacus", Hopkins was approached to re-record lines from a scene that was being added back to the film; this scene featured Laurence Olivier and Tony Curtis, with Hopkins recommended by Olivier's widow, Joan Plowright, to perform her late husband's part thanks to his talent for mimicry. In 1995, he directed "August", an adaptation of Chekhov's "Uncle Vanya" set in Wales. His first screenplay, an experimental drama called "Slipstream", which he also directed and scored, premiered at the Sundance Film Festival in 2007. In 1997, Hopkins narrated the BBC nature documentary series "Killing for a Living", which showed predatory behaviour in nature. He narrated episodes 1 through 3 before being replaced by John Shrapnel. Reception and acting style. Hopkins is renowned for his preparation for roles. He indicated in interviews that once he has committed to a project, he will go over his lines as many times as is needed (sometimes upwards of 200) until the lines sound natural to him, so that he can "do it without thinking". This leads to an almost casual style of delivery that belies the amount of groundwork done beforehand. While it can allow for some careful improvisation, it has also brought him into conflict with the occasional director who departs from the script, or demands what the actor views as an excessive number of takes. Hopkins has stated that after he is finished with a scene, he simply discards the lines, not remembering them later on, unlike many actors, who can often recall their lines from a film even years later. In the early 1970s, he began a collaboration with Richard Attenborough, who called him "the greatest actor of his generation". Attenborough, who directed Hopkins on five occasions, found himself going to great lengths during the filming of "Shadowlands" (1993) to accommodate the differing approaches of his two stars (Hopkins and Debra Winger), who shared many scenes. Whereas Hopkins preferred the spontaneity of a fresh take and liked to keep rehearsals to a minimum, Winger rehearsed continuously. To allow for this, Attenborough stood in for Hopkins during Winger's rehearsals, only bringing him in for the last one before a take. The director praised Hopkins for "this extraordinary ability to make you believe when you hear him that it is the very first time he has ever said that line. It's an incredible gift." Renowned for his ability to remember lines, Hopkins keeps his memory supple by learning things by heart such as poetry and Shakespeare. In Steven Spielberg's "Amistad" (1997), Hopkins astounded the crew with his memorisation of a seven-page courtroom speech, delivering it in one go. An overawed Spielberg could not bring himself to call Hopkins "Tony", and insisted on addressing him as Sir Anthony throughout the shoot. In a 2016 interview with the "Radio Times", Hopkins spoke of his ability to frighten people since he was a boy growing up in Port Talbot, Wales. 
"I don't know why but I've always known what scares people. When I was a kid I'd tell the girls around the street the story about Dracula and I'd go 'th-th-th' (the sucking noise which he reproduced in "The Silence of the Lambs"). As a result, they'd run away screaming." He recalled going through the script of "Silence of the Lambs" for the first time with fellow cast members. "I didn't know what they were going to make of it but I'd prepared it—my first line to Jodie Foster was: 'Good morning. You're one of Jack Crawford's aren't you?' Everyone froze. There was a silence. Then one of the producers said, 'Holy crap, don't change a thing'." On Hopkins's approach to playing villains, Miranda Sawyer in "The Guardian" writes, "When he portrays deliberately scary people, he plays them quietly, emphasising their sinister control." Hopkins is a well-known mimic, adept at turning his native Welsh accent into whatever is required by a character. In the 1991 restoration of "Spartacus", he recreated the voice of his late mentor Laurence Olivier in a scene for which the soundtrack had been lost. His interview on the 1998 relaunch edition of the British television talk show "Parkinson" featured an impersonation of comedian Tommy Cooper. Hopkins has said acting "like a submarine" has helped him to deliver credible performances in his thrillers. He said, "It's very difficult for an actor to avoid, you want to show a bit. But I think the less one shows the better." Awards, honours and legacy. Hopkins was appointed a CBE in 1987 and was knighted by Queen Elizabeth II for "services to the arts" at Buckingham Palace in 1993. In 1988, he was awarded an honorary D.Litt. degree and in 1992 received an honorary fellowship from the University of Wales, Lampeter. He was made a freeman of his home town, Port Talbot, in 1996. Hopkins has also been honored with various lifetime achievement awards for his work in film and television. In 2006, Gwyneth Paltrow presented him with the Golden Globe Cecil B. DeMille Award. In 2008, Richard Attenborough presented Hopkins with the BAFTA Fellowship for lifetime achievement from the British Academy of Film and Television Arts. Hopkins has also received a star on the Hollywood Walk of Fame in 2003. In 2021, Hopkins won the Oscar for the Best Actor for "The Father". He became the oldest nominee and winner of the award. Personal life. Hopkins resides in Malibu, California. He had moved to the United States once before, during the late 1970s, to pursue his film career, but returned to London in the late 1980s. However, he decided to return to the US following his 1990s success. Retaining his British citizenship, he became a naturalised American citizen on 12 April 2000, with Hopkins stating: "I have dual citizenship; it just so happens I live in America". Hopkins has been married three times. He was married to actress Petronella Barker from 1966 to 1972, Jennifer Lynton from 1973 to 2002, and Stella Arroyave since 2003. Hopkins met Arroyave, a Colombian-born antiques dealer in the early 2000s, and he credits her with helping him overcome his feelings of depression at the time. On Christmas Eve 2013, he celebrated his 10th wedding anniversary by having a blessing at a private service at St Davids Cathedral in St Davids. He has a daughter from his first marriage. The two are estranged; when asked if he had any grandchildren, he said, "I don't have any idea. People break up. Families split and, you know, 'Get on with your life.' People make choices. I don't care one way or the other." 
Hopkins previously suffered from alcoholism; he has stayed sober since he stopped drinking just after Christmas 1975. He said, "I made that quantum leap when I asked for help. I just found something and a woman talked to me and she said, just trust in God. And I said, well, why not?" When asked, "Did you literally pray?" Hopkins responded: "No, I didn't. I think because I asked for help, which is a form of prayer." In January 2020, when asked if he was still agnostic, he responded, "Agnosticism is a bit strange. An agnostic doubts and atheism denies. I'm not a holy Joe; I'm just an old sinner like everyone else. I do believe more than ever now that there is a vast area of our own lives that we know nothing about. As I get older, I can cry at the drop of a hat because the wonderful, terrible passion of life is so short. I have to believe there's something bigger than me. I'm just a microbe. That, for me, is the biggest feeling of relief – acknowledging that I am really nothing. I'm compelled to say, whoever's running the show, thank you very much." Hopkins quit smoking using the Allen Carr method. In 2008, he embarked on a weight loss programme, and by 2010, he had lost 5 st 10 lb (80 lb or 36 kg). In January 2017, in an interview with "The Desert Sun", Hopkins said that he had been diagnosed with Asperger syndrome three years earlier, but that he was "high end". Hopkins has a pet cat named Niblo, which he adopted in Budapest. Hopkins eschews meat and prefers a pescatarian diet. Hopkins is a fan of the BBC sitcom "Only Fools and Horses", and once remarked in an interview how he would love to appear in the series. Writer John Sullivan saw the interview, and with Hopkins in mind created the character Danny Driscoll, a local villain. However, filming of the new series coincided with the filming of "The Silence of the Lambs", making Hopkins unavailable. The role instead went to Roy Marsden. Philanthropy. Hopkins has offered his support to various charities and appeals, notably becoming President of the National Trust's Snowdonia Appeal, raising funds for the preservation of Snowdonia National Park in north Wales. In 1998, he donated £1 million towards the £3 million needed to aid the Trust's efforts in purchasing parts of Snowdon. Prior to the campaign, Hopkins wrote "Anthony Hopkins' Snowdonia", which was published in 1995. Due to his contributions to Snowdonia, in addition to his film career, in 2004 Hopkins was named among the 100 Welsh Heroes in a Welsh poll. Hopkins has been a patron of the YMCA centre in his home town of Port Talbot, South Wales, for more than 20 years, having first joined the YMCA in the 1950s. He supports various other philanthropic groups. He was a Guest of Honour at a Gala Fundraiser for Women in Recovery, Inc., a Venice, California-based non-profit organisation offering rehabilitation assistance to women in recovery from substance abuse. He is also a volunteer teacher at the Ruskin School of Acting in Santa Monica, California. Hopkins served as the Honorary Patron of The New Heritage Theatre Company in Boise, Idaho, from 1997 to 2007, participating in fundraising and marketing efforts for the repertory theatre. Hopkins contributed toward the refurbishment of a £2.3 million wing at his alma mater, the Royal Welsh College of Music & Drama in Cardiff, named the Anthony Hopkins Centre. It opened in 1999. 
Hopkins is a prominent member of the environmental protection group Greenpeace and, as of early 2008, was featured in a television advertisement campaign voicing concerns about whaling in Japan. He has also been a patron of RAPt (Rehabilitation for Addicted Prisoners Trust) since its early days and in 1992 helped open their first intensive drug and alcohol rehabilitation unit at HM Prison Downview, a women's prison in Surrey, England. Hopkins is an admirer of the late Welsh comedian Tommy Cooper. On 23 February 2008, as patron of the Tommy Cooper Society, he unveiled a commemorative statue in the entertainer's home town of Caerphilly. For the ceremony, he donned Cooper's trademark fez and performed a comic routine.
2398
Ardal O'Hanlon
Ardal O'Hanlon is an Irish comedian, actor, and author. He played Father Dougal McGuire in "Father Ted" (1995–1998), George Sunday/Thermoman in "My Hero" (2000–2006), and DI Jack Mooney in "Death in Paradise" (2017–2020). His novel "The Talk of the Town" was published in 1998. Early life. O'Hanlon was born in Carrickmacross, County Monaghan, the son of Fianna Fáil TD and physician Rory O'Hanlon and Teresa (née Ward). He is the third of six children, and has three brothers and two sisters. The episode of "Who Do You Think You Are?" which aired on 6 October 2008 revealed that O'Hanlon's paternal grandfather, Michael O'Hanlon, was a medical student at University College Dublin (UCD) who had joined the Irish Republican Army during the Irish War of Independence and was a member of Michael Collins's Squad, which assassinated British secret service agents on the morning of Bloody Sunday. Details of his grandfather's activities survive in the UCD Archives and at Blackrock College. It also transpired that, on his mother's side, he is a close relative of Peter Fenelon Collier. O'Hanlon was schooled at Blackrock College in Dublin and graduated in 1987 from the National Institute for Higher Education, Dublin (now Dublin City University), with a degree in communications studies. Career. Together with Kevin Gildea and Barry Murphy, O'Hanlon founded the International Comedy Cellar, upstairs in the International Bar on Dublin's South Wicklow Street. Dublin had no comedy scene at the time. As a stand-up, O'Hanlon won the Hackney Empire New Act of the Year competition in 1994. For a time he was the presenter of "The Stand Up Show". He was spotted by Graham Linehan, who was to cast him as Father Dougal McGuire in "Father Ted" (1995–98). During filming, O'Hanlon went to buy shoes while still in costume; the seller took him for a real priest and offered him the footwear for free. In 1995 he received the Top TV Comedy Newcomer award at the British Comedy Awards for this role. In 1995, he appeared (as Father Dougal) in a Channel 4 ident ("Hello, you're watching ... television"), and during Comic Relief on BBC1. This was followed by the award-winning short comedy film "Flying Saucer Rock'n'Roll". In a 2019 interview, O'Hanlon admitted that he had attempted to distance himself from "Father Ted" once the show had finished. O'Hanlon moved into straight acting alongside Emma Fielding and Beth Goddard in the ITV comedy-drama "Big Bad World", which aired for two series in summer 1999 and winter 2001. He also played a minor role in "The Butcher Boy" as the father of Joe, Francie's best friend, and appeared in an episode of the original "Whose Line is it Anyway?". In 2000, O'Hanlon starred in the comedy series "My Hero", in which he played a very naive superhero from the planet Ultron. His character juggled world-saving heroics with life in suburbia. He stayed in the role until the first episode of series 6 in July 2006, when he was replaced mid-episode by James Dreyfus. O'Hanlon also provided the voice of the lead character in the three Christmas television cartoon specials of "Robbie the Reindeer". He appeared in the 2005 BBC One sitcom "Blessed", written by Ben Elton; at the 2005 British Comedy Awards, it was publicly slated by Jonathan Ross, albeit in jest. Towards the end of 2005, he played an eccentric Scottish character, Coconut Tam, in the family film "The Adventures of Greyfriars Bobby". He has also appeared on radio, including an appearance on "Quote... Unquote" on BBC Radio 4 on 18 July 2011. 
Appropriately, one of his questions concerned a quotation from "Father Ted". In 2015, he appeared as the incompetent angel Smallbone in the sitcom "The Best Laid Plans", on the same channel. In 2006, O'Hanlon wrote and presented an RTÉ television series called "Leagues Apart", which saw him investigate the biggest and most passionate football rivalries in a number of European countries. These included Roma vs Lazio in Italy, Barcelona vs Real Madrid in Spain, and Galatasaray vs Fenerbahçe in Turkey. He followed this with another RTÉ show, "So You Want To Be Taoiseach?" in 2007. It was a political series in which O'Hanlon gave tongue-in-cheek advice on how to go about becoming Taoiseach of Ireland. He appeared in the "Doctor Who" episode "Gridlock", broadcast on 14 April 2007, in which he played a catlike creature named Thomas Kincade Brannigan. O'Hanlon appeared in series 3 of the TV show "Skins", playing Kieran, the politics teacher of Naomi Campbell (Lily Loveless); his character attempted to kiss her and later formed a relationship with Naomi's mother (Olivia Colman). O'Hanlon played the lead role in the Irish comedy television programme "Val Falvey, TD" on RTÉ One. He has also performed at the Edinburgh Fringe. In February 2011, O'Hanlon returned to the Gate Theatre, Dublin, starring in the Irish premiere of Christopher Hampton's translation of Yasmina Reza's "God of Carnage", alongside Maura Tierney. Later that year, he appeared in the comedy panel show "Argumental". O'Hanlon has written a novel, "The Talk of the Town" (known in the United States as "Knick Knack Paddy Whack"), which was published in 1998. The novel is about a teenage boy, Patrick Scully, and his friends. In February 2015 he officially launched the 2015 Sky Cat Laughs Comedy Festival, which took place in Kilkenny from 28 May to 1 June. In 2015 he played the role of Peter the Milkman in the Sky One sitcom "After Hours". On 2 February 2017, it was announced that he would take the lead role of DI Jack Mooney in the BBC crime drama "Death in Paradise", following Kris Marshall's departure, announced the same day. He announced his intention to leave the series in early 2020 and was replaced by Ralf Little. On 25 November 2021, it was announced that he would participate in series 13 of "Taskmaster". He finished in fourth place, ahead of Judi Love. Personal life. O'Hanlon met his wife Melanie as a teenager. They have three children. He is a supporter of Leeds United.
2400
AMD
Advanced Micro Devices, Inc., commonly abbreviated as AMD, is an American multinational semiconductor company based in Santa Clara, California, that develops computer processors and related technologies for business and consumer markets. The company was founded in 1969 by Jerry Sanders and a group of other technology professionals. AMD's early products were primarily memory chips and other components for computers. The company later expanded into the microprocessor market, competing with Intel, its main rival in the industry. In the early 2000s, AMD experienced significant growth and success, thanks in part to its strong position in the PC market and the success of its Athlon and Opteron processors. However, the company faced challenges in the late 2000s and early 2010s, as it struggled to keep up with Intel in the race to produce faster and more powerful processors. In the late 2010s, AMD regained market share thanks to the success of its Ryzen processors, which proved highly competitive with Intel's offerings in many workloads, including business and cloud applications. AMD's processors are used in a wide range of computing devices, including personal computers, servers, laptops, and gaming consoles. While it initially manufactured its own processors, the company later outsourced its manufacturing, a practice known as going fabless, after GlobalFoundries was spun off in 2009. AMD's main products include microprocessors, motherboard chipsets, embedded processors, graphics processors, and FPGAs for servers, workstations, personal computers, and embedded system applications. The company has also expanded into new markets, such as the data center and gaming markets, and has announced plans to enter the high-performance computing market. History. First twelve years. Advanced Micro Devices was formally incorporated by Jerry Sanders, along with seven of his colleagues from Fairchild Semiconductor, on May 1, 1969. Sanders, an electrical engineer who was the director of marketing at Fairchild, had, like many Fairchild executives, grown frustrated with the increasing lack of support, opportunity, and flexibility within the company. He later decided to leave to start his own semiconductor company, following in the footsteps of Robert Noyce (developer of the first silicon integrated circuit at Fairchild in 1959) and Gordon Moore, who together founded the semiconductor company Intel in July 1968. In September 1969, AMD moved from its temporary location in Santa Clara to Sunnyvale, California. To immediately secure a customer base, AMD initially became a second-source supplier of microchips designed by Fairchild and National Semiconductor. AMD first focused on producing logic chips. The company guaranteed quality control to United States Military Standard, an advantage in the early computer industry since unreliability in microchips was a distinct problem that customers – including computer manufacturers, the telecommunications industry, and instrument manufacturers – wanted to avoid. In November 1969, the company manufactured its first product: the Am9300, a 4-bit MSI shift register, which began selling in 1970. Also in 1970, AMD produced its first proprietary product, the Am2501 logic counter, which was highly successful. Its bestselling product in 1971 was the Am2505, the fastest multiplier available. In 1971, AMD entered the RAM chip market, beginning with the Am3101, a 64-bit bipolar RAM. 
That year AMD also greatly increased the sales volume of its linear integrated circuits, and by year-end the company's total annual sales reached US$4.6 million. AMD went public in September 1972. The company was a second source for Intel MOS/LSI circuits by 1973, with products such as Am14/1506 and Am14/1507, dual 100-bit dynamic shift registers. By 1975, AMD was producing 212 products – of which 49 were proprietary, including the Am9102 (a static N-channel 1024-bit RAM) and three low-power Schottky MSI circuits: Am25LS07, Am25LS08, and Am25LS09. Intel had created the first microprocessor, its 4-bit 4004, in 1971. By 1975, AMD entered the microprocessor market with the Am9080, a reverse-engineered clone of the Intel 8080, and the Am2900 bit-slice microprocessor family. When Intel began installing microcode in its microprocessors in 1976, it entered into a cross-licensing agreement with AMD, which was granted a copyright license to the microcode in its microprocessors and peripherals, effective October 1976. In 1977, AMD entered into a joint venture with Siemens, a German engineering conglomerate wishing to enhance its technology expertise and enter the American market. Siemens purchased 20% of AMD's stock, giving the company an infusion of cash to increase its product lines. The two companies also jointly established Advanced Micro Computers (AMC), located in Silicon Valley and in Germany, allowing AMD to enter the microcomputer development and manufacturing field, in particular based on AMD's second-source Zilog Z8000 microprocessors. When the two companies' vision for Advanced Micro Computers diverged, AMD bought out Siemens' stake in the American division in 1979. AMD closed Advanced Micro Computers in late 1981 after switching focus to manufacturing second-source Intel x86 microprocessors. Total sales in fiscal year 1978 topped $100 million, and in 1979, AMD debuted on the New York Stock Exchange. In 1979, production also began on AMD's new semiconductor fabrication plant in Austin, Texas; the company already had overseas assembly facilities in Penang and Manila, and began construction on a fabrication plant in San Antonio in 1981. In 1980, AMD began supplying semiconductor products for telecommunications, an industry undergoing rapid expansion and innovation. Technology exchange agreement with Intel. Intel had introduced the first x86 microprocessors in 1978. In 1981, IBM created its PC, and wanted Intel's x86 processors, but only under the condition that Intel also provide a second-source manufacturer for its patented x86 microprocessors. Intel and AMD entered into a 10-year technology exchange agreement, first signed in October 1981 and formally executed in February 1982. The terms of the agreement were that each company could acquire the right to become a second-source manufacturer of semiconductor products developed by the other; that is, each party could "earn" the right to manufacture and sell a product developed by the other, if agreed to, by exchanging the manufacturing rights to a product of equivalent technical complexity. The technical information and licenses needed to make and sell a part would be exchanged for a royalty to the developing company. The 1982 agreement also extended the 1976 AMD–Intel cross-licensing agreement through 1995. The agreement included the right to invoke arbitration of disagreements, and after five years the right of either party to end the agreement with one year's notice. 
The main result of the 1982 agreement was that AMD became a second-source manufacturer of Intel's x86 microprocessors and related chips, and Intel provided AMD with database tapes for its 8086, 80186, and 80286 chips. However, in the event of a bankruptcy or takeover of AMD, the cross-licensing agreement would be effectively canceled. Beginning in 1982, AMD began volume-producing second-source Intel-licensed 8086, 8088, 80186, and 80188 processors, and by 1984, its own Am286 clone of Intel's 80286 processor, for the rapidly growing market of IBM PCs and IBM clones. It also continued its successful concentration on proprietary bipolar chips. In 1983, it introduced INT.STD.1000, the highest manufacturing quality standard in the industry. The company continued to spend greatly on research and development, and created the world's first 512K EPROM in 1984. That year, AMD was listed in the book "The 100 Best Companies to Work for in America", and later made the "Fortune" 500 list for the first time in 1985. By mid-1985, the microchip market experienced a severe downturn, mainly due to long-term aggressive trade practices (dumping) from Japan, but also due to a crowded and non-innovative chip market in the United States. AMD rode out the mid-1980s crisis by aggressively innovating and modernizing, devising the Liberty Chip program of designing and manufacturing one new chip or chipset per week for 52 weeks in fiscal year 1986, and by heavily lobbying the U.S. government until sanctions and restrictions were put in place to prevent predatory Japanese pricing. During this time, AMD withdrew from the DRAM market, and made some headway into the CMOS market, which it had lagged in entering, having focused instead on bipolar chips. AMD had some success in the mid-1980s with the AMD7910 and AMD7911 "World Chip" FSK modem, one of the first multi-standard devices that covered both Bell and CCITT tones at up to 1200 baud half duplex or 300/300 full duplex. Beginning in 1986, AMD embraced the perceived shift toward RISC with their own AMD Am29000 (29k) processor; the 29k survived as an embedded processor. The company also increased its EPROM memory market share in the late 1980s. Throughout the 1980s, AMD was a second-source supplier of Intel x86 processors. In 1991, it introduced its own 386-compatible Am386, an AMD-designed chip. Creating its own chips, AMD began to compete directly with Intel. AMD had a large, successful flash memory business, even during the dotcom bust. In 2003, to divest some manufacturing and aid its overall cash flow, which was under duress from aggressive microprocessor competition from Intel, AMD spun off its flash memory business and manufacturing into Spansion, a joint venture with Fujitsu, which had been co-manufacturing flash memory with AMD since 1993. In December 2005, AMD divested itself of Spansion to focus on the microprocessor market, and Spansion went public in an IPO. Acquisition of ATI, spin-off of GlobalFoundries, and acquisition of Xilinx. On July 24, 2006, AMD announced its acquisition of the Canadian 3D graphics card company ATI Technologies. AMD paid $4.3 billion and 58 million shares of its capital stock, for a total of approximately $5.4 billion. The transaction was completed on October 25, 2006. On August 30, 2010, AMD announced that it would retire the ATI brand name for its graphics chipsets in favor of the AMD brand name. 
In October 2008, AMD announced plans to spin off manufacturing operations in the form of GlobalFoundries Inc., a multibillion-dollar joint venture with Advanced Technology Investment Co., an investment company formed by the government of Abu Dhabi. The partnership and spin-off gave AMD an infusion of cash and allowed it to focus solely on chip design. To assure the Abu Dhabi investors of the new venture's success, AMD's CEO Hector Ruiz stepped down in July 2008, while remaining executive chairman, in preparation for becoming chairman of GlobalFoundries in March 2009. President and COO Dirk Meyer became AMD's CEO. Recessionary losses necessitated AMD cutting 1,100 jobs in 2009. In August 2011, AMD announced that former Lenovo executive Rory Read would be joining the company as CEO, replacing Meyer. In November 2011, AMD announced plans to lay off more than 10% (1,400) of its employees from across all divisions worldwide. In October 2012, it announced plans to lay off an additional 15% of its workforce to reduce costs in the face of declining sales revenue. AMD acquired the low-power server manufacturer SeaMicro in early 2012, with an eye to bringing out an Arm64 server chip. On October 8, 2014, AMD announced that Rory Read had stepped down after three years as president and chief executive officer. He was succeeded by Lisa Su, a key lieutenant who had been serving as chief operating officer since June. On October 16, 2014, AMD announced a new restructuring plan along with its Q3 results. Effective July 1, 2014, AMD reorganized into two business groups: Computing and Graphics, which primarily includes desktop and notebook processors and chipsets, discrete GPUs, and professional graphics; and Enterprise, Embedded, and Semi-Custom, which primarily includes server and embedded processors, dense servers, semi-custom SoC products (including solutions for gaming consoles), engineering services, and royalties. As part of this restructuring, AMD announced that 7% of its global workforce would be laid off by the end of 2014. After the GlobalFoundries spin-off and subsequent layoffs, AMD was left with significant vacant space at 1 AMD Place, its aging Sunnyvale headquarters office complex. In August 2016, AMD's 47 years in Sunnyvale came to a close when it signed a lease with the Irvine Company for a new 220,000 sq. ft. headquarters building in Santa Clara. AMD's new location at Santa Clara Square faces the headquarters of archrival Intel across the Bayshore Freeway and San Tomas Aquino Creek. Around the same time, AMD also agreed to sell 1 AMD Place to the Irvine Company. In April 2019, the Irvine Company secured approval from the Sunnyvale City Council of its plans to demolish 1 AMD Place and redevelop the entire 32-acre site into townhomes and apartments. In October 2020, AMD announced that it was acquiring Xilinx in an all-stock transaction. The acquisition was completed in February 2022, with an estimated acquisition price of $50 billion. Products. CPUs and APUs. IBM PC and the x86 architecture. In February 1982, AMD signed a contract with Intel, becoming a licensed second-source manufacturer of 8086 and 8088 processors. IBM wanted to use the Intel 8088 in its IBM PC, but its policy at the time was to require at least two sources for its chips. AMD later produced the Am286 under the same arrangement. 
In 1984, to shore up its advantage in the marketplace, Intel internally decided to stop cooperating with AMD in supplying product information, and delayed and eventually refused to convey the technical details of the Intel 80386. In 1987, AMD invoked arbitration over the issue, and Intel reacted by canceling the 1982 technological-exchange agreement altogether. After three years of testimony, AMD eventually won in arbitration in 1992, but Intel disputed this decision. Another long legal dispute followed, ending in 1994 when the Supreme Court of California sided with the arbitrator and AMD. In 1990, Intel countersued AMD, renegotiating AMD's right to use derivatives of Intel's microcode for its cloned processors. In the face of uncertainty during the legal dispute, AMD was forced to develop clean-room designed versions of Intel code for its x386 and x486 processors, the former long after Intel had released its own x386 in 1985. In March 1991, AMD released the Am386, its clone of the Intel 386 processor. By October of the same year it had sold one million units. In 1993, AMD introduced the first of the Am486 family of processors, which proved popular with a large number of original equipment manufacturers, including Compaq, which signed an exclusive agreement to use the Am486. The Am5x86, another Am486-based processor, was released in November 1995, and continued AMD's success as a fast, cost-effective processor. Finally, in an agreement effective 1996, AMD received the rights to the microcode in Intel's x386 and x486 processor families, but not the rights to the microcode in the following generations of processors. K5, K6, Athlon, Duron, and Sempron. AMD's first in-house x86 processor was the K5, launched in 1996. The "K" in its name was a reference to Kryptonite, the only substance known to harm comic book character Superman. This itself was a reference to Intel's hegemony over the market, i.e., an anthropomorphization of them as Superman. The number "5" was a reference to the fifth generation of x86 processors; rival Intel had previously introduced its line of fifth-generation x86 processors as Pentium because the U.S. Patent and Trademark Office had ruled that mere numbers could not be trademarked. In 1996, AMD purchased NexGen, specifically for the rights to their Nx series of x86-compatible processors. AMD gave the NexGen design team their own building, left them alone, and gave them time and money to rework the Nx686. The result was the K6 processor, introduced in 1997. Although it was based on Socket 7, variants such as K6-III/450 were faster than Intel's Pentium II (sixth-generation processor). The K7 was AMD's seventh-generation x86 processor, making its debut under the brand name Athlon on June 23, 1999. Unlike previous AMD processors, it could not be used on the same motherboards as Intel's, due to licensing issues surrounding Intel's Slot 1 connector, and instead used a Slot A connector, based on the DEC Alpha processor bus. The Duron was a lower-cost and limited version of the Athlon (64 KB instead of 256 KB L2 cache) in a 462-pin socketed PGA (socket A) or soldered directly onto the motherboard. Sempron was released as a lower-cost Athlon XP, replacing Duron in the socket A PGA era. It has since been migrated upward to all new sockets, up to AM3. On October 9, 2001, the Athlon XP was released. On February 10, 2003, the Athlon XP with 512 KB L2 cache was released. Athlon 64, Opteron and Phenom. 
The K8 was a major revision of the K7 architecture, with the most notable features being the addition of a 64-bit extension to the x86 instruction set (called x86-64, AMD64, or x64), the incorporation of an on-chip memory controller, and the implementation of an extremely high-performance point-to-point interconnect called HyperTransport, as part of the Direct Connect Architecture. The technology was initially launched as the Opteron server-oriented processor on April 22, 2003. Shortly thereafter, it was incorporated into a product for desktop PCs, branded Athlon 64. On April 21, 2005, AMD released the first dual-core Opteron, an x86-based server CPU. A month later, it released the Athlon 64 X2, the first desktop-based dual-core processor family. In May 2007, AMD abandoned the string "64" in its dual-core desktop product branding, becoming Athlon X2, downplaying the significance of 64-bit computing in its processors. Further updates involved improvements to the microarchitecture, and a shift of the target market from mainstream desktop systems to value dual-core desktop systems. In 2008, AMD started to release dual-core Sempron processors exclusively in China, branded as the Sempron 2000 series, with lower HyperTransport speed and smaller L2 cache. AMD completed its dual-core product portfolio for each market segment. In September 2007, AMD released the first server Opteron K10 processors, followed in November by the Phenom processor for desktops. K10 processors came in dual-core, triple-core, and quad-core versions, with all cores on a single die. AMD released a new platform codenamed "Spider", which used the new Phenom processor, as well as an R770 GPU and a 790 GX/FX chipset from the AMD 700 chipset series. However, AMD built the Spider at 65 nm, which was uncompetitive with Intel's smaller and more power-efficient 45 nm process. In January 2009, AMD released a new processor line dubbed Phenom II, a refresh of the original Phenom built using the 45 nm process. AMD's new platform, codenamed "Dragon", used the new Phenom II processor, and an ATI R770 GPU from the R700 GPU family, as well as a 790 GX/FX chipset from the AMD 700 chipset series. The Phenom II came in dual-core, triple-core and quad-core variants, all using the same die, with cores disabled for the triple-core and dual-core versions. The Phenom II resolved issues that the original Phenom had, including a low clock speed, a small L3 cache, and a Cool'n'Quiet bug that decreased performance. The Phenom II cost less but was not performance-competitive with Intel's mid-to-high-range Core 2 Quads. The Phenom II also enhanced its predecessor's memory controller, allowing it to use DDR3 in a new native socket AM3, while maintaining backward compatibility with AM2+, the socket used for the Phenom, and allowing the use of the DDR2 memory that was used with the platform. In April 2010, AMD released a new Phenom II Hexa-core (6-core) processor codenamed "Thuban". This was a totally new die based on the hexa-core "Istanbul" Opteron processor. It included AMD's "turbo core" technology, which allows the processor to automatically switch from 6 cores to 3 faster cores when more pure speed is needed. The Magny Cours and Lisbon server parts were released in 2010. The Magny Cours part came in 8- to 12-core versions and the Lisbon part in 4- and 6-core versions. Magny Cours is focused on performance while the Lisbon part is focused on high performance per watt. Magny Cours is an MCM (multi-chip module) with two hexa-core "Istanbul" Opteron parts. 
Magny Cours used the new Socket G34 for dual- and quad-socket systems and was marketed as the Opteron 61xx series, while Lisbon used Socket C32, certified for single- or dual-socket use, and was marketed as the Opteron 41xx series. Both were built on a 45 nm SOI process. Fusion becomes the AMD APU. Following AMD's 2006 acquisition of Canadian graphics company ATI Technologies, an initiative codenamed "Fusion" was announced to integrate a CPU and GPU together on some of AMD's microprocessors, including a built-in PCI Express link to accommodate separate PCI Express peripherals, eliminating the northbridge chip from the motherboard. The initiative intended to move some of the processing originally done on the CPU (e.g. floating-point unit operations) to the GPU, which is better optimized for some calculations. Fusion was later renamed the AMD APU (Accelerated Processing Unit). Llano was the second APU released, targeted at the mainstream desktop and laptop market. It incorporated a CPU and GPU on the same die, as well as northbridge functions, and used "Socket FM1" with DDR3 memory. The CPU part of the processor was based on the Phenom II "Deneb" processor. AMD suffered an unexpected decrease in revenue due to production problems with the Llano. Further AMD APUs became common in laptops running Windows 7 and Windows 8. These include AMD's entry-level APUs, the E1 and E2, and the Vision A-series (the "A" standing for "accelerated"), which competed with Intel's mainstream Core i-series. These range from the lower-performance A4 chipset to the A6, A8, and A10. These all incorporate next-generation Radeon graphics cards, with the A4 utilizing the base Radeon HD chip and the rest using a Radeon R4 graphics card, with the exception of the highest-model A10 (A10-7300), which uses an R6 graphics card. New microarchitectures. High-power, high-performance Bulldozer cores. Bulldozer was AMD's microarchitecture codename for server and desktop AMD FX processors, first released on October 12, 2011. This Family 15h microarchitecture was the successor to the Family 10h (K10) design. Bulldozer was a clean-sheet design, not a development of earlier processors. The core was specifically aimed at 10–125 W TDP computing products. AMD claimed dramatic performance-per-watt efficiency improvements in high-performance computing (HPC) applications with Bulldozer cores. While hopes were high that Bulldozer would bring AMD to be performance-competitive with Intel once more, most benchmarks were disappointing. In some cases the new Bulldozer products were slower than the K10 models they were built to replace. The Piledriver microarchitecture was the 2012 successor to Bulldozer, increasing clock speeds and performance relative to its predecessor. Piledriver would be released in AMD FX, APU, and Opteron product lines. Piledriver was subsequently followed by the Steamroller microarchitecture in 2013. Used exclusively in AMD's APUs, Steamroller focused on greater parallelism. In 2015, the Excavator microarchitecture replaced Piledriver. Expected to be the last microarchitecture of the Bulldozer series, Excavator focused on improved power efficiency. Low-power Cat cores. The Bobcat microarchitecture was revealed during a speech from AMD executive vice-president Henri Richard at Computex 2007 and was put into production during the first quarter of 2011. 
Based on the difficulty of competing in the x86 market with a single core optimized for the 10–100 W range, AMD had developed a simpler core with a target range of 1–10 watts. In addition, it was believed that the core could migrate into the hand-held space if the power consumption could be reduced to less than 1 W. Jaguar, the microarchitecture codename for Bobcat's successor, was released in 2013 and is used in various APUs from AMD aimed at the low-power/low-cost market. Jaguar and its derivatives would go on to be used in the custom APUs of the PlayStation 4, Xbox One, PlayStation 4 Pro, Xbox One S, and Xbox One X. Jaguar was later followed by the Puma microarchitecture in 2014. ARM architecture-based designs. In 2012, AMD announced it was working on ARM products, both as semi-custom products and as server products. The initial server product, announced in 2014 as the Opteron A1100, was an 8-core Cortex-A57-based ARMv8-A SoC and was expected to be followed by an APU incorporating a Graphics Core Next GPU. However, the Opteron A1100 was not released until 2016, with the delay attributed to adding software support. The A1100 was also criticized for not having support from major vendors upon its release. In 2014, AMD also announced the K12 custom core for release in 2016. While being ARMv8-A instruction set architecture compliant, the K12 was expected to be an entirely custom design targeting the server, embedded, and semi-custom markets. While ARM architecture development continued, products based on K12 were subsequently delayed with no release planned, in preference to the development of AMD's x86-based Zen microarchitecture. Zen-based CPUs and APUs. Zen is a new architecture for AMD's x86-64-based Ryzen series of CPUs and APUs, introduced in 2017 and built from the ground up by a team led by Jim Keller, beginning with his arrival in 2012 and taping out before his departure in September 2015. One of AMD's primary goals with Zen was an IPC increase of at least 40%; in February 2017, AMD announced that it had actually achieved a 52% increase. Processors made on the Zen architecture are built on the 14 nm FinFET node and have a renewed focus on single-core performance and HSA compatibility. Previous processors from AMD were either built on the 32 nm process ("Bulldozer" and "Piledriver" CPUs) or the 28 nm process ("Steamroller" and "Excavator" APUs). Because of this, Zen is much more energy efficient. The Zen architecture is the first to encompass CPUs and APUs from AMD built for a single socket (Socket AM4). Also new for this architecture is the implementation of simultaneous multithreading (SMT) technology, something Intel had offered for years on some of its processors through its proprietary Hyper-Threading implementation of SMT. This is a departure from the "Clustered MultiThreading" design introduced with the Bulldozer architecture. Zen also has support for DDR4 memory. AMD released the Zen-based high-end Ryzen 7 "Summit Ridge" series CPUs on March 2, 2017, mid-range Ryzen 5 series CPUs on April 11, 2017, and entry-level Ryzen 3 series CPUs on July 27, 2017. AMD later released the Epyc line of Zen-derived server processors for 1P and 2P systems. In October 2017, AMD released Zen-based APUs as Ryzen Mobile, incorporating Vega graphics cores. In January 2018, AMD announced its plans for the Ryzen 2 lineup. 
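To make the CPU features just described more concrete, the following is a minimal C sketch (a hypothetical illustration, not AMD sample code): it reads the processor vendor string and the AMD64 long-mode flag introduced with the K8 via the x86 CPUID instruction, and reports the number of logical processors the operating system exposes, which on SMT-enabled Zen parts is typically twice the physical core count. It assumes a GCC or Clang toolchain on an x86 Linux host.

```c
/* Hypothetical sketch, not AMD reference code.
 * Build on an x86 Linux host: gcc -o cpuinfo cpuinfo.c */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <cpuid.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    /* CPUID leaf 0: the vendor string is returned in EBX, EDX, ECX (in that order). */
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);
    printf("vendor string: %s\n", vendor);   /* "AuthenticAMD" on AMD parts */

    /* Extended leaf 0x80000001: EDX bit 29 is the long-mode (AMD64/x86-64) flag
     * first introduced with the K8 generation. */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (edx & (1u << 29)))
        printf("64-bit long mode (AMD64/x86-64): supported\n");
    else
        printf("64-bit long mode (AMD64/x86-64): not reported\n");

    /* Logical processors visible to the OS; with SMT enabled this is typically
     * twice the number of physical cores on Zen-based parts. */
    printf("logical CPUs online: %ld\n", (long)sysconf(_SC_NPROCESSORS_ONLN));
    return 0;
}
```

The vendor-string and long-mode checks are standard x86 CPUID usage; only the program structure and output wording here are illustrative assumptions.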
AMD launched CPUs with the 12 nm Zen+ microarchitecture in April 2018, following up with the 7 nm Zen 2 microarchitecture in June 2019, including an update to the Epyc line with new processors using the Zen 2 microarchitecture in August 2019, with Zen 3 slated for release in Q3 2020. As of 2019, AMD's Ryzen processors were reported to outsell Intel's consumer desktop processors. At CES 2020, AMD announced its Ryzen Mobile 4000 series, the first 7 nm x86 mobile processors, the first 7 nm 8-core (also 16-thread) high-performance mobile processors, and the first 8-core (also 16-thread) processors for ultrathin laptops. This generation is still based on the Zen 2 architecture. In October 2020, AMD announced new processors based on the Zen 3 architecture. On PassMark's single-thread performance test, the Ryzen 5 5600X bested all other CPUs besides the Ryzen 9 5950X. In August 2022, AMD announced its initial lineup of CPUs based on the new Zen 4 architecture. The Steam Deck, PlayStation 5, Xbox Series X and Series S all use chips based on the Zen 2 microarchitecture, with proprietary tweaks and configurations in each system's implementation that differ from the APUs AMD sells commercially. Graphics products and GPUs. Radeon within AMD. In 2008, the ATI division of AMD released the TeraScale microarchitecture implementing a unified shader model. This design replaced the fixed-function hardware of previous graphics cards with multipurpose, programmable shaders. Initially released as part of the GPU for the Xbox 360, this technology would go on to be used in Radeon branded HD 2000 parts. Three generations of TeraScale would be designed and used in parts from 2008 to 2014. Combined GPU and CPU divisions. In a 2009 restructuring, AMD merged the CPU and GPU divisions to support the company's APUs, which fused both graphics and general purpose processing. In 2011, AMD released the successor to TeraScale, Graphics Core Next (GCN). This new microarchitecture emphasized GPGPU compute capability in addition to graphics processing, with a particular aim of supporting heterogeneous computing on AMD's APUs. GCN's reduced instruction set ISA allowed for significantly increased compute capability over TeraScale's very long instruction word ISA. Since GCN's introduction with the HD 7970 in 2012, five generations of the GCN architecture have been produced through at least 2017. Radeon Technologies Group. In September 2015, AMD separated the graphics technology division of the company into an independent internal unit called the Radeon Technologies Group (RTG), headed by Raja Koduri. This gave the graphics division of AMD autonomy in product design and marketing. The RTG then went on to create and release the Polaris and Vega microarchitectures in 2016 and 2017, respectively. In particular, the Vega (fifth-generation GCN) microarchitecture includes a number of major revisions to improve performance and compute capabilities. In November 2017, Raja Koduri left RTG and CEO and President Lisa Su took his position. In January 2018, it was reported that two industry veterans joined RTG, namely Mike Rayfield as senior vice president and general manager of RTG, and David Wang as senior vice president of engineering for RTG. In January 2020, AMD announced that its second generation RDNA graphics architecture was in development, with the aim of competing with the Nvidia RTX graphics products for performance leadership.
In October 2020, AMD announced their new RX 6000 series GPUs, their first high-end products based on RDNA 2 and capable of handling ray tracing natively, aiming to challenge Nvidia's RTX 3000 GPUs. Semi-custom and game console products. In 2012, AMD's then-CEO Rory Read began a program to offer semi-custom designs. Rather than AMD simply designing and offering a single product, potential customers could work with AMD to design a custom chip based on AMD's intellectual property. Customers pay a non-recurring engineering fee for design and development, and a purchase price for the resulting semi-custom products. In particular, AMD noted their unique position of offering both x86 and graphics intellectual property. These semi-custom designs would have design wins as the APUs in the PlayStation 4 and Xbox One and the subsequent PlayStation 4 Pro, Xbox One S, Xbox One X, Xbox Series X/S, and PlayStation 5. Financially, these semi-custom products would represent a majority of the company's revenue in 2016. In November 2017, AMD and Intel announced that Intel would market a product combining in a single package an Intel Core CPU, a semi-custom AMD Radeon GPU, and HBM2 memory. Other hardware. AMD motherboard chipsets. Before the launch of Athlon 64 processors in 2003, AMD designed chipsets for their processors spanning the K6 and K7 processor generations. The chipsets include the AMD-640, AMD-751, and the AMD-761. The situation changed in 2003 with the release of Athlon 64 processors, and AMD chose not to further design its own chipsets for its desktop processors while opening the desktop platform to allow other firms to design chipsets. This was the "Open Platform Management Architecture", with ATI, VIA and SiS developing their own chipsets for Athlon 64 processors and later Athlon 64 X2 and Athlon 64 FX processors, including the Quad FX platform chipset from Nvidia. The initiative went further with the release of Opteron server processors as AMD stopped the design of server chipsets in 2004 after releasing the AMD-8111 chipset, and again opened the server platform for firms to develop chipsets for Opteron processors. Nvidia and Broadcom remained the only firms designing server chipsets for Opteron processors. As the company completed the acquisition of ATI Technologies in 2006, the firm gained the ATI chipset design team, which had previously designed the Radeon Xpress 200 and the Radeon Xpress 3200 chipsets. AMD then renamed the chipsets for AMD processors under AMD branding (for instance, the CrossFire Xpress 3200 chipset was renamed the AMD 580X CrossFire chipset). In February 2007, AMD announced the first AMD-branded chipset since 2004 with the release of the AMD 690G chipset (previously under the development codename "RS690"), targeted at mainstream IGP computing. It was the industry's first chipset to implement an HDMI 1.2 port on motherboards, and shipped more than a million units. While ATI had aimed at releasing an Intel IGP chipset, the plan was scrapped and the inventories of the Radeon Xpress 1250 (codenamed "RS600", sold under the ATI brand) were sold to two OEMs, Abit and ASRock. Although AMD stated the firm would still produce Intel chipsets, Intel had not granted ATI a license for its front-side bus (FSB).
On November 15, 2007, AMD announced a new chipset series portfolio, the AMD 7-Series chipsets, covering the range from the enthusiast multi-graphics segment to the value IGP segment, to replace the AMD 480/570/580 chipsets and AMD 690 series chipsets, marking AMD's first enthusiast multi-graphics chipset. Discrete graphics chipsets were launched on November 15, 2007, as part of the codenamed "Spider" desktop platform, and IGP chipsets were launched at a later time in spring 2008 as part of the codenamed "Cartwheel" platform. AMD returned to the server chipset market with the AMD 800S series server chipsets, which include support for up to six SATA 6.0 Gbit/s ports, the C6 power state (which is featured in Fusion processors), and AHCI 1.2 with SATA FIS-based switching support. This chipset family supports Phenom processors and the Quad FX enthusiast platform (890FX), as well as IGP (890GX). With the advent of AMD's APUs in 2011, traditional northbridge features such as the connection to graphics and the PCI Express controller were incorporated into the APU die. Accordingly, APUs were connected to a single-chip chipset, renamed the Fusion Controller Hub (FCH), which primarily provided southbridge functionality. AMD released new chipsets in 2017 to support the release of their new Ryzen products. As the Zen microarchitecture already includes much of the northbridge connectivity, the AM4-based chipsets primarily varied in the number of additional PCI Express lanes, USB connections, and SATA connections available. These AM4 chipsets were designed in conjunction with ASMedia. Embedded products. Embedded CPUs. In the early 1990s, AMD began marketing a series of embedded system-on-a-chip (SoC) processors called AMD Élan, starting with the SC300 and SC310. Both combine a 32-bit, low-voltage Am386SX CPU running at 25 or 33 MHz with a memory controller, PC/AT peripheral controllers, a real-time clock, PLL clock generators and an ISA bus interface. The SC300 additionally integrates two PC Card slots and a CGA-compatible LCD controller. They were followed in 1996 by the SC4xx types, now supporting VESA Local Bus and using the Am486 with clock speeds of up to 100 MHz. An SC450 running at 33 MHz was used, for example, in the Nokia 9000 Communicator. In 1999 the SC520 was announced; using an Am586 at 100 or 133 MHz and supporting SDRAM and PCI, it was the last member of the series. In February 2002, AMD acquired Alchemy Semiconductor for its Alchemy line of MIPS processors for the hand-held and portable media player markets. On June 13, 2006, AMD officially announced that the line was to be transferred to Raza Microelectronics, Inc., a designer of MIPS processors for embedded applications. In August 2003, AMD also purchased the Geode business, originally the Cyrix MediaGX, from National Semiconductor to augment its existing line of embedded x86 processor products. During the second quarter of 2004, it launched new low-power Geode NX processors based on the K7 Thoroughbred architecture, offered in fanless versions as well as a fan-cooled version with a TDP of 25 W. This technology is used in a variety of embedded systems (casino slot machines and customer kiosks, for instance), several UMPC designs in Asian markets, as well as the OLPC XO-1 computer, an inexpensive laptop computer intended to be distributed to children in developing countries around the world. The Geode LX processor was announced in 2005 and was expected to remain available through 2015.
AMD has also introduced 64-bit processors into its embedded product line starting with the AMD Opteron processor. Leveraging the high throughput enabled through HyperTransport and the Direct Connect Architecture, these server-class processors have been targeted at high-end telecom and storage applications. In 2007, AMD added the AMD Athlon, AMD Turion, and Mobile AMD Sempron processors to its embedded product line. Leveraging the same 64-bit instruction set and Direct Connect Architecture as the AMD Opteron but at lower power levels, these processors were well suited to a variety of traditional embedded applications. Throughout 2007 and into 2008, AMD continued to add both single-core Mobile AMD Sempron and AMD Athlon processors and dual-core AMD Athlon X2 and AMD Turion processors to its embedded product line, and came to offer embedded 64-bit solutions ranging from 8 W TDP Mobile AMD Sempron and AMD Athlon processors for fanless designs up to multi-processor systems leveraging multi-core AMD Opteron processors, all supporting longer-than-standard availability. The ATI acquisition in 2006 included the Imageon and Xilleon product lines. In late 2008, the entire handheld division was sold off to Qualcomm, who have since produced the Adreno series. Also in 2008, the Xilleon division was sold to Broadcom. In April 2007, AMD announced the release of the M690T integrated graphics chipset for embedded designs. This enabled AMD to offer complete processor and chipset solutions targeted at embedded applications requiring high-performance 3D and video, such as emerging digital signage, kiosk, and point-of-sale applications. The M690T was followed by the M690E, specifically for embedded applications, which removed the TV output (which had required Macrovision licensing for OEMs) and enabled native support for dual TMDS outputs, allowing dual independent DVI interfaces. In January 2011, AMD announced the AMD Embedded G-Series Accelerated Processing Unit. This was the first APU for embedded applications. These were followed by updates in 2013 and 2016. In May 2012, AMD announced the AMD Embedded R-Series Accelerated Processing Unit. This family of products incorporates the Bulldozer CPU architecture and discrete-class Radeon HD 7000G series graphics. This was followed by a system-on-a-chip (SoC) version in 2015, which offered a faster CPU and faster graphics, with support for DDR4 SDRAM memory. Embedded graphics. AMD builds graphics processors for use in embedded systems. They can be found in anything from casinos to healthcare, with a large portion of products being used in industrial machines. These products include a complete graphics processing device in a compact multi-chip module including RAM and the GPU. ATI began offering embedded GPUs with the E2400 in 2008. Since that time AMD has released regular updates to its embedded GPU lineup in 2009, 2011, 2015, and 2016, reflecting improvements in its GPU technology. Current product lines. CPU and APU products. AMD's portfolio of CPUs and APUs. Graphics products. AMD's portfolio of dedicated graphics processors. Radeon-branded products. RAM. In 2011, AMD began selling Radeon branded DDR3 SDRAM to support the higher bandwidth needs of AMD's APUs. While the RAM is sold by AMD, it was manufactured by Patriot Memory and VisionTek. This was later followed by higher-speed, gaming-oriented DDR3 memory in 2013. Radeon branded DDR4 SDRAM memory was released in 2015, despite no AMD CPUs or APUs supporting DDR4 at the time.
AMD noted in 2017 that these products are "mostly distributed in Eastern Europe" and that it continues to be active in the business. Solid-state drives. AMD announced in 2014 it would sell Radeon branded solid-state drives manufactured by OCZ with capacities up to 480 GB and using the SATA interface. Technologies. CPU hardware. Technologies found in AMD CPU/APU and other products include: Graphics hardware. Technologies found in AMD GPU products include: Software. Over the past decade, AMD has made considerable efforts towards opening its software tools above the firmware level. In the following, software not expressly stated to be free can be assumed to be proprietary. Distribution. AMD Radeon Software is the default channel for official software distribution from AMD. It includes both free and proprietary software components, and supports both Microsoft Windows and Linux. GPU. AMD's most notable public software is on the GPU side; AMD has opened both its graphics and compute stacks: Production and fabrication. Previously, AMD produced its chips at company-owned semiconductor foundries. AMD pursued a strategy of collaboration with other semiconductor manufacturers, IBM and Motorola, to co-develop production technologies. AMD's founder Jerry Sanders termed this the "Virtual Gorilla" strategy to compete with Intel's significantly greater investments in fabrication. In 2008, AMD spun off its chip foundries into an independent company named GlobalFoundries. This breakup of the company was attributed to the increasing costs of each process node. The Emirate of Abu Dhabi purchased the newly created company through its subsidiary Advanced Technology Investment Company (ATIC), acquiring the final stake from AMD in 2009. With the spin-off of its foundries, AMD became a fabless semiconductor manufacturer, designing products to be produced at for-hire foundries. The GlobalFoundries spin-off included an agreement under which AMD would produce a certain volume of products at GlobalFoundries. Both before and after the spin-off, AMD has pursued production with other foundries, including TSMC and Samsung. It has been argued that this would reduce risk for AMD by decreasing dependence on any one foundry, which has caused issues in the past. In 2018, AMD started shifting the production of their CPUs and GPUs to TSMC, following GlobalFoundries' announcement that it was halting development of its 7 nm process. AMD revised their wafer purchase requirement with GlobalFoundries in 2019, allowing AMD to freely choose foundries for 7 nm nodes and below, while maintaining purchase agreements for 12 nm and above through 2021. Corporate affairs. Partnerships. AMD uses strategic industry partnerships to further its business interests as well as to rival Intel's dominance and resources: Litigation with Intel. AMD has a long history of litigation with former (and current) partner and x86 creator Intel.
2402
Albrecht Dürer
Albrecht Dürer (21 May 1471 – 6 April 1528), sometimes spelled in English as Durer, was a German painter, printmaker, and theorist of the German Renaissance. Born in Nuremberg, Dürer established his reputation and influence across Europe in his twenties due to his high-quality woodcut prints. He was in contact with the major Italian artists of his time, including Raphael, Giovanni Bellini, and Leonardo da Vinci, and from 1512 was patronized by Emperor Maximilian I. Dürer's vast body of work includes engravings, his preferred technique in his later prints, altarpieces, portraits and self-portraits, watercolours and books. The woodcut series are more Gothic than the rest of his work. His well-known engravings include the three "Meisterstiche" (master prints) "Knight, Death and the Devil" (1513), "Saint Jerome in his Study" (1514), and "Melencolia I" (1514). His watercolours mark him as one of the first European landscape artists, while his woodcuts revolutionised the potential of that medium. Dürer's introduction of classical motifs into Northern art, through his knowledge of Italian artists and German humanists, has secured his reputation as one of the most important figures of the Northern Renaissance. This is reinforced by his theoretical treatises, which involve principles of mathematics, perspective, and ideal proportions. Biography. Early life (1471–1490). Dürer was born on 21 May 1471, the third child and second son of Albrecht Dürer the Elder and Barbara Holper, who married in 1467 and had eighteen children together. Albrecht Dürer the Elder (originally Albrecht Ajtósi) was a successful goldsmith who by 1455 had moved to Nuremberg from Ajtós, near Gyula in Hungary. He married Holper, his master's daughter, when he himself qualified as a master. One of Albrecht's brothers, Hans Dürer, was also a painter and trained under him. Another of Albrecht's brothers, Endres Dürer, took over their father's business and was a master goldsmith. The German name "Dürer" is a translation from the Hungarian, "Ajtósi". Initially, it was "Türer", meaning doormaker, which is "ajtós" in Hungarian (from "ajtó", meaning door). A door is featured in the coat-of-arms the family acquired. Albrecht Dürer the Younger later changed "Türer", his father's diction of the family's surname, to "Dürer", to adapt to the local Nuremberg dialect. Dürer's godfather Anton Koberger left goldsmithing to become a printer and publisher in the year of Dürer's birth. He became the most successful publisher in Germany, eventually owning twenty-four printing-presses and a number of offices in Germany and abroad. Koberger's most famous publication was the "Nuremberg Chronicle", published in 1493 in German and Latin editions. It contained an unprecedented 1,809 woodcut illustrations (albeit with many repeated uses of the same block) by the Wolgemut workshop. Dürer may have worked on some of these, as the work on the project began while he was with Wolgemut. Because Dürer left autobiographical writings and was widely known by his mid-twenties, his life is well documented in several sources. After a few years of school, Dürer learned the basics of goldsmithing and drawing from his father. Though his father wanted him to continue his training as a goldsmith, he showed such a precocious talent in drawing that he started as an apprentice to Michael Wolgemut at the age of fifteen in 1486. A self-portrait, a drawing in silverpoint, is dated 1484 (Albertina, Vienna) "when I was a child", as his later inscription says.
The drawing is one of the earliest surviving children's drawings of any kind, and, as Dürer's Opus One, has helped define his oeuvre as deriving from, and always linked to, himself. Wolgemut was the leading artist in Nuremberg at the time, with a large workshop producing a variety of works of art, in particular woodcuts for books. Nuremberg was then an important and prosperous city, a centre for publishing and many luxury trades. It had strong links with Italy, especially Venice, a relatively short distance across the Alps. "Wanderjahre" and marriage (1490–1494). After completing his apprenticeship, Dürer followed the common German custom of taking "Wanderjahre" (in effect gap years) in which the apprentice learned skills from artists in other areas; Dürer was to spend about four years away. He left in 1490, possibly to work under Martin Schongauer, the leading engraver of Northern Europe, but he died shortly before Dürer's arrival at Colmar in 1492. It is unclear where Dürer travelled in the intervening period, though it is likely that he went to Frankfurt and the Netherlands. In Colmar, Dürer was welcomed by Schongauer's brothers, the goldsmiths Caspar and Paul and the painter Ludwig. Later that year, Dürer travelled to Basel to stay with another brother of Martin Schongauer, the goldsmith Georg. In 1493 Dürer went to Strasbourg, where he would have experienced the sculpture of Nikolaus Gerhaert. Dürer's first painted self-portrait (now in the Louvre) was executed at this time, probably to be sent back to his fiancée in Nuremberg. Very soon after his return to Nuremberg, on 7 July 1494, at the age of 23, Dürer was married to Agnes Frey following an arrangement made during his absence. Agnes was the daughter of a prominent brass worker (and amateur harpist) in the city. However, no children resulted from the marriage, and with Albrecht the Dürer name died out. The marriage between Agnes and Albrecht was not a generally happy one, as indicated by the letters of Dürer in which he quipped to Willibald Pirckheimer in an extremely rough tone about his wife. He called her an "old crow" and made other vulgar remarks. Pirckheimer also made no secret of his antipathy towards Agnes, describing her as a miserly shrew with a bitter tongue, who helped cause Dürer's death at a young age. It has been hypothesized by many scholars that Albrecht was bisexual or homosexual, due to the recurrence of homoerotic themes in his works (e.g. "The Men's Bath"), and the nature of his correspondence with close friends. First journey to Italy (1494–1495). Within three months of his marriage, Dürer left for Italy, alone, perhaps stimulated by an outbreak of plague in Nuremberg. He made watercolour sketches as he traveled over the Alps. Some have survived and others may be deduced from accurate landscapes of real places in his later work, for example his engraving "Nemesis". In Italy, he went to Venice to study its more advanced artistic world. Through Wolgemut's tutelage, Dürer had learned how to make prints in drypoint and design woodcuts in the German style, based on the works of Schongauer and the Housebook Master. He also would have had access to some Italian works in Germany, but the two visits he made to Italy had an enormous influence on him. He wrote that Giovanni Bellini was the oldest and still the best of the artists in Venice.
His drawings and engravings show the influence of others, notably Antonio del Pollaiuolo, with his interest in the proportions of the body; Lorenzo di Credi; and Andrea Mantegna, whose work he produced copies of while training. Dürer probably also visited Padua and Mantua on this trip. Return to Nuremberg (1495–1505). On his return to Nuremberg in 1495, Dürer opened his own workshop (being married was a requirement for this). Over the next five years, his style increasingly integrated Italian influences into underlying Northern forms. Arguably his best works in the first years of the workshop were his woodcut prints, mostly religious, but including secular scenes such as "The Men's Bath House" (). These were larger and more finely cut than the great majority of German woodcuts hitherto, and far more complex and balanced in composition. It is now thought unlikely that Dürer cut any of the woodblocks himself; this task would have been performed by a specialist craftsman. However, his training in Wolgemut's studio, which made many carved and painted altarpieces and both designed and cut woodblocks for woodcut, evidently gave him great understanding of what the technique could be made to produce, and how to work with block cutters. Dürer either drew his design directly onto the woodblock itself, or glued a paper drawing to the block. Either way, his drawings were destroyed during the cutting of the block. His series of sixteen designs for the "Apocalypse" is dated 1498, as is his engraving of "St. Michael Fighting the Dragon". He made the first seven scenes of the "Great Passion" in the same year, and a little later, a series of eleven on the Holy Family and saints. The "Seven Sorrows Polyptych", commissioned by Frederick III of Saxony in 1496, was executed by Dürer and his assistants c. 1500. In 1502, Dürer's father died. Around 1503–1505 Dürer produced the first 17 of a set illustrating the "Life of the Virgin", which he did not finish for some years. Neither these nor the "Great Passion" were published as sets until several years later, but prints were sold individually in considerable numbers. During the same period Dürer trained himself in the difficult art of using the burin to make engravings. It is possible he had begun learning this skill during his early training with his father, as it was also an essential skill of the goldsmith. In 1496 he executed the "Prodigal Son", which the Italian Renaissance art historian Giorgio Vasari singled out for praise some decades later, noting its Germanic quality. He was soon producing some spectacular and original images, notably "Nemesis" (1502), "The Sea Monster" (1498), and "Saint Eustace" (c. 1501), with a highly detailed landscape background and animals. His landscapes of this period, such as "Pond in the Woods" and "Willow Mill", are quite different from his earlier watercolours. There is a much greater emphasis on capturing atmosphere, rather than depicting topography. He made a number of Madonnas, single religious figures, and small scenes with comic peasant figures. Prints are highly portable and these works made Dürer famous throughout the main artistic centres of Europe within a very few years. The Venetian artist Jacopo de' Barbari, whom Dürer had met in Venice, visited Nuremberg in 1500, and Dürer said that he learned much about the new developments in perspective, anatomy, and proportion from him. De' Barbari was unwilling to explain everything he knew, so Dürer began his own studies, which would become a lifelong preoccupation. 
A series of extant drawings show Dürer's experiments in human proportion, leading to the famous engraving of "Adam and Eve" (1504), which shows his subtlety while using the burin in the texturing of flesh surfaces. This is the only existing engraving signed with his full name. Dürer created large numbers of preparatory drawings, especially for his paintings and engravings, and many survive, most famously the "Betende Hände" ("Praying Hands") from circa 1508, a study for an apostle in the Heller altarpiece. He continued to make images in watercolour and bodycolour (usually combined), including a number of still lifes of meadow sections or animals, including his "Young Hare" (1502) and the "Great Piece of Turf" (1503). Second journey to Italy (1505–1507). In Italy, he returned to painting, at first producing a series of works executed in tempera on linen. These include portraits and altarpieces, notably, the Paumgartner altarpiece and the "Adoration of the Magi". In early 1506, he returned to Venice and stayed there until the spring of 1507. By this time Dürer's engravings had attained great popularity and were being copied. In Venice he was given a valuable commission from the emigrant German community for the church of San Bartolomeo. This was the altar-piece known as the "Adoration of the Virgin" or the "Feast of Rose Garlands". It includes portraits of members of Venice's German community, but shows a strong Italian influence. It was later acquired by the Emperor Rudolf II and taken to Prague. Nuremberg and the masterworks (1507–1520). Despite the regard in which he was held by the Venetians, Dürer returned to Nuremberg by mid-1507, remaining in Germany until 1520. His reputation had spread throughout Europe and he was on friendly terms and in communication with most of the major artists including Raphael. Between 1507 and 1511 Dürer worked on some of his most celebrated paintings: "Adam and Eve" (1507), "Martyrdom of the Ten Thousand" (1508, for Frederick of Saxony), "Virgin with the Iris" (1508), the altarpiece "Assumption of the Virgin" (1509, for Jacob Heller of Frankfurt), and "Adoration of the Trinity" (1511, for Matthaeus Landauer). During this period he also completed two woodcut series, the "Great Passion" and the "Life of the Virgin", both published in 1511 together with a second edition of the "Apocalypse" series. The post-Venetian woodcuts show Dürer's development of chiaroscuro modelling effects, creating a mid-tone throughout the print to which the highlights and shadows can be contrasted. Other works from this period include the thirty-seven "Little Passion" woodcuts, first published in 1511, and a set of fifteen small engravings on the same theme in 1512. Complaining that painting did not make enough money to justify the time spent when compared to his prints, he produced no paintings from 1513 to 1516. In 1513 and 1514 Dürer created his three most famous engravings: "Knight, Death and the Devil" (1513, probably based on Erasmus's "Handbook of a Christian Knight"), "St. Jerome in His Study", and the much-debated "Melencolia I" (both 1514, the year Dürer's mother died). Further outstanding pen and ink drawings of Dürer's period of art work of 1513 were drafts for his friend Pirckheimer. These drafts were later used to design Lusterweibchen chandeliers, combining an antler with a wooden sculpture. In 1515, he created his "woodcut of a Rhinoceros" which had arrived in Lisbon from a written description and sketch by another artist, without ever seeing the animal himself. 
Depicting an Indian rhinoceros, the image has such force that it remains one of his best-known works and was still used in some German school science textbooks as late as the last century. In the years leading to 1520 he produced a wide range of works, including the woodblocks for the first western printed star charts in 1515 and portraits in tempera on linen in 1516. His only experiments with etching came in this period, producing five between 1515 and 1516 and a sixth in 1518, a technique he may have abandoned as unsuited to his aesthetic of methodical, classical form. Patronage of Maximilian I. From 1512, Maximilian I became Dürer's major patron. He commissioned "The Triumphal Arch", a vast work printed from 192 separate blocks, the symbolism of which is partly informed by Pirckheimer's translation of Horapollo's "Hieroglyphica". The design program and explanations were devised by Johannes Stabius, the architectural design by the master builder and court-painter Jörg Kölderer and the woodcutting itself by Hieronymous Andreae, with Dürer as designer-in-chief. "The Arch" was followed by "The Triumphal Procession", the program of which was worked out in 1512 and which includes woodcuts by Albrecht Altdorfer and Hans Springinklee, as well as Dürer. Dürer worked with pen on the marginal images for an edition of the Emperor's printed Prayer-Book; these were quite unknown until facsimiles were published in 1808 as part of the first book published in lithography. Dürer's work on the book was halted for an unknown reason, and the decoration was continued by artists including Lucas Cranach the Elder and Hans Baldung. Dürer also made several portraits of the Emperor, including one shortly before Maximilian's death in 1519. Maximilian was a very cash-strapped prince who sometimes failed to pay, yet turned out to be Dürer's most important patron. In his court, artists and learned men were respected, which was not common at that time (later, Dürer commented that in Germany, as a non-noble, he was treated as a parasite). Pirckheimer (whom he met in 1495, before entering the service of Maximilian) was also an important personage in the court and a great cultural patron, who had a strong influence on Dürer as his tutor in classical knowledge and humanistic critical methodology, as well as collaborator. In Maximilian's court, Dürer also collaborated with a great number of other brilliant artists and scholars of the time who became his friends, like Johannes Stabius, Konrad Peutinger, Conrad Celtes, and Hans Tscherte (an imperial architect). Dürer took strong pride in his ability, seeing himself as a prince of his profession. One day the emperor, trying to show Dürer an idea, attempted to sketch with charcoal himself, but kept breaking it. Dürer took the charcoal from Maximilian's hand, finished the drawing and told him: "This is my scepter." On another occasion, Maximilian noticed that the ladder Dürer used was too short and unstable, and told a noble to hold it for him. The noble refused, saying that it was beneath him to serve a non-noble. Maximilian then came to hold the ladder himself, and told the noble that he could make a noble out of a peasant any day, but he could not make an artist like Dürer out of a noble. This story and an 1849 painting by Siegert depicting it have recently become relevant again. This nineteenth-century painting shows Dürer painting a mural at St. Stephen's Cathedral, Vienna.
Apparently, this reflects a seventeenth-century "artists' legend" about the previously mentioned encounter (in which the emperor held the ladder), placing that encounter in the period when Dürer was working on the Viennese murals. In 2020, during restoration work, art connoisseurs discovered a piece of handwriting now attributed to Dürer, suggesting that the Nuremberg master had actually participated in creating the murals at St. Stephen's Cathedral. In the 2022 Dürer exhibition in Nuremberg (in which the drawing technique is also traced and connected to Dürer's other works), the identity of the commissioner is discussed. The painting by Siegert (and the legend associated with it) is now used as evidence to suggest that this commissioner was Maximilian. Dürer is historically recorded to have entered the emperor's service in 1511, and the mural's date is calculated to be around 1505, but it is possible they had known and worked with each other before 1511. Cartographic and astronomical works. Dürer's exploration of space led to a relationship and cooperation with the court astronomer Johannes Stabius. Stabius also often acted as Dürer's and Maximilian's go-between for their financial problems. In 1515 Dürer and Stabius created the first world map projected on a solid geometric sphere. Also in 1515, Stabius, Dürer and a collaborating astronomer produced the first planispheres of both the southern and northern hemispheres, as well as the first printed celestial maps, which prompted the revival of interest in the field of uranometry throughout Europe. Journey to the Netherlands (1520–1521). Maximilian's death came at a time when Dürer was concerned he was losing "my sight and freedom of hand" (perhaps caused by arthritis) and increasingly affected by the writings of Martin Luther. In July 1520 Dürer made his fourth and last major journey, to renew the Imperial pension Maximilian had given him and to secure the patronage of the new emperor, Charles V, who was to be crowned at Aachen. Dürer journeyed with his wife and her maid via the Rhine to Cologne and then to Antwerp, where he was well received and produced numerous drawings in silverpoint, chalk and charcoal. In addition to attending the coronation, he visited Cologne (where he admired the painting of Stefan Lochner), Nijmegen, 's-Hertogenbosch, Bruges (where he saw Michelangelo's "Madonna of Bruges"), Ghent (where he admired van Eyck's "Ghent altarpiece"), and Zeeland. Dürer took a large stock of prints with him and wrote in his diary to whom he gave, exchanged or sold them, and for how much. This provides rare information on the monetary value placed on prints at this time. Unlike paintings, their sale was very rarely documented. While providing valuable documentary evidence, Dürer's Netherlandish diary also reveals that the trip was not a profitable one. For example, Dürer offered his last portrait of Maximilian to Maximilian's daughter, Margaret of Austria, but eventually traded the picture for some white cloth after Margaret disliked the portrait and declined to accept it. During this trip he also met Bernard van Orley, Jan Provoost, Gerard Horenbout, Jean Mone, Joachim Patinir and Tommaso Vincidor, though he did not, it seems, meet Quentin Matsys. Having secured his pension, Dürer returned home in July 1521, having caught an undetermined illness, which afflicted him for the rest of his life, and greatly reduced his rate of work. Final years, Nuremberg (1521–1528).
On his return to Nuremberg, Dürer worked on a number of grand projects with religious themes, including a crucifixion scene and a Sacra conversazione, though neither was completed. This may have been due in part to his declining health, but perhaps also because of the time he gave to the preparation of his theoretical works on geometry and perspective, the proportions of men and horses, and fortification. However, one consequence of this shift in emphasis was that during the last years of his life, Dürer produced comparatively little as an artist. In painting, there was only a portrait, a few other small works, and two panels showing St. John with St. Peter and St. Paul with St. Mark. This last great work, "the Four Apostles", was given by Dürer to the City of Nuremberg, although he was given 100 guilders in return. As for engravings, Dürer's work was restricted to portraits and illustrations for his treatise. The portraits include Cardinal-Elector Albert of Mainz; Frederick the Wise, elector of Saxony; the humanist scholar Willibald Pirckheimer; Philipp Melanchthon; and Erasmus of Rotterdam. For those of the Cardinal, Melanchthon, and Dürer's final major work, a drawn portrait of the Nuremberg patrician Ulrich Starck, Dürer depicted the sitters in profile. Despite complaining of his lack of a formal classical education, Dürer was greatly interested in intellectual matters and learned much from his boyhood friend Willibald Pirckheimer, whom he no doubt consulted on the content of many of his images. He also derived great satisfaction from his friendships and correspondence with Erasmus and other scholars. Dürer succeeded in producing two books during his lifetime. "The Four Books on Measurement" were published at Nuremberg in 1525 and constituted the first book for adults on mathematics in German, as well as being cited later by Galileo and Kepler. The other, a work on city fortifications, was published in 1527. "The Four Books on Human Proportion" were published posthumously, shortly after his death in 1528. Dürer died in Nuremberg at the age of 56, leaving an estate valued at 6,874 florins – a considerable sum. He is buried in the "Johannisfriedhof" cemetery. His large house (purchased in 1509 from the heirs of the astronomer Bernhard Walther), where his workshop was located and where his widow lived until her death in 1539, remains a prominent Nuremberg landmark. Dürer and the Reformation. Dürer's writings suggest that he may have been sympathetic to Luther's ideas, though it is unclear if he ever left the Catholic Church. Dürer wrote of his desire to draw Luther in his diary in 1520: "And God help me that I may go to Dr. Martin Luther; thus I intend to make a portrait of him with great care and engrave him on a copper plate to create a lasting memorial of the Christian man who helped me overcome so many difficulties." In a letter to Nicholas Kratzer in 1524, Dürer wrote, "because of our Christian faith we have to stand in scorn and danger, for we are reviled and called heretics". Most tellingly, Pirckheimer wrote in a letter to Johann Tscherte in 1530: "I confess that in the beginning I believed in Luther, like our Albert of blessed memory ... but as anyone can see, the situation has become worse." Dürer may even have contributed to the Nuremberg City Council's mandating Lutheran sermons and services in March 1525.
Notably, Dürer had contacts with various reformers, such as Zwingli, Andreas Karlstadt, Melanchthon, Erasmus and Cornelius Grapheus from whom Dürer received Luther's "Babylonian Captivity" in 1520. Yet Erasmus and C. Grapheus are better said to be Catholic change agents. Also, from 1525, "the year that saw the peak and collapse of the Peasants' War, the artist can be seen to distance himself somewhat from the [Lutheran] movement..." Dürer's later works have also been claimed to show Protestant sympathies. His 1523 "The Last Supper" woodcut has often been understood to have an evangelical theme, focusing as it does on Christ espousing the Gospel, as well as the inclusion of the Eucharistic cup, an expression of Protestant utraquism, although this interpretation has been questioned. The delaying of the engraving of St Philip, completed in 1523 but not distributed until 1526, may have been due to Dürer's uneasiness with images of saints; even if Dürer was not an iconoclast, in his last years he evaluated and questioned the role of art in religion. Legacy and influence. Dürer exerted a huge influence on the artists of succeeding generations, especially in printmaking, the medium through which his contemporaries mostly experienced his art, as his paintings were predominantly in private collections located in only a few cities. His success in spreading his reputation across Europe through prints was undoubtedly an inspiration for major artists such as Raphael, Titian, and Parmigianino, all of whom collaborated with printmakers to promote and distribute their work. His engravings seem to have had an intimidating effect upon his German successors; the "Little Masters" who attempted few large engravings but continued Dürer's themes in small, rather cramped compositions. Lucas van Leyden was the only Northern European engraver to successfully continue to produce large engravings in the first third of the 16th century. The generation of Italian engravers who trained in the shadow of Dürer all either directly copied parts of his landscape backgrounds (Giulio Campagnola, Giovanni Battista Palumba, Benedetto Montagna and Cristofano Robetta), or whole prints (Marcantonio Raimondi and Agostino Veneziano). However, Dürer's influence became less dominant after 1515, when Marcantonio perfected his new engraving style, which in turn travelled over the Alps to also dominate Northern engraving. In painting, Dürer had relatively little influence in Italy, where probably only his altarpiece in Venice was seen, and his German successors were less effective in blending German and Italian styles. His intense and self-dramatizing self-portraits have continued to have a strong influence up to the present, especially on painters in the 19th and 20th century who desired a more dramatic portrait style. Dürer has never fallen from critical favour, and there have been significant revivals of interest in his works in Germany in the "Dürer Renaissance" of about 1570 to 1630, in the early nineteenth century, and in German nationalism from 1870 to 1945. The Lutheran Church commemorates Dürer annually on 6 April, along with Michelangelo, Lucas Cranach the Elder and Hans Burgkmair. Theoretical works. In all his theoretical works, in order to communicate his theories in the German language rather than in Latin, Dürer used graphic expressions based on a vernacular, craftsmen's language. For example, "Schneckenlinie" ("snail-line") was his term for a spiral form. 
Thus, Dürer contributed to the expansion of German prose which Luther had begun with his translation of the Bible. "Four Books on Measurement". Dürer's work on geometry is called the "Four Books on Measurement" ("Underweysung der Messung mit dem Zirckel und Richtscheyt" or "Instructions for Measuring with Compass and Ruler"). The first book focuses on linear geometry. Dürer's geometric constructions include helices, conchoids and epicycloids. He also draws on Apollonius and on Johannes Werner's 'Libellus super viginti duobus elementis conicis' of 1522. The second book moves on to two-dimensional geometry, i.e. the construction of regular polygons. Here Dürer favours the methods of Ptolemy over Euclid. The third book applies these principles of geometry to architecture, engineering and typography. In architecture Dürer cites Vitruvius but elaborates his own classical designs and columns. In typography, Dürer depicts the geometric construction of the Latin alphabet, relying on Italian precedent. However, his construction of the Gothic alphabet is based upon an entirely different modular system. The fourth book completes the progression of the first and second by moving to three-dimensional forms and the construction of polyhedra. Here Dürer discusses the five Platonic solids, as well as seven Archimedean semi-regular solids and several of his own invention. "Four Books on Human Proportion". Dürer's work on human proportions is called the "Four Books on Human Proportion" ("Vier Bücher von Menschlicher Proportion") of 1528. The first book was mainly composed by 1512/13 and completed by 1523, showing five differently constructed types of both male and female figures, all parts of the body expressed in fractions of the total height. Dürer based these constructions on both Vitruvius and empirical observations of "two to three hundred living persons", in his own words. The second book includes eight further types, broken down not into fractions but into an Albertian system, which Dürer probably learned from Francesco di Giorgio's work of 1525. In the third book, Dürer gives principles by which the proportions of the figures can be modified, including the mathematical simulation of convex and concave mirrors; here Dürer also deals with human physiognomy. The fourth book is devoted to the theory of movement. Appended to the last book, however, is a self-contained essay on aesthetics, which Dürer worked on between 1512 and 1528, and it is here that we learn of his theories concerning 'ideal beauty'. Dürer rejected Alberti's concept of an objective beauty, proposing a relativist notion of beauty based on variety. Nonetheless, Dürer still believed that truth was hidden within nature, and that there were rules which ordered beauty, even though he found it difficult to define the criteria for such a code. In 1512/13 his three criteria were function ('Nutz'), naïve approval ('Wohlgefallen') and the happy medium ('Mittelmass'). However, unlike Alberti and Leonardo, Dürer was most troubled by understanding not just the abstract notions of beauty but also how an artist can create beautiful images. Between 1512 and the final draft in 1528, Dürer's belief developed from an understanding of human creativity as spontaneous or inspired to a concept of 'selective inward synthesis'. In other words, an artist builds on a wealth of visual experiences in order to imagine beautiful things.
Dürer's belief in the abilities of a single artist over inspiration prompted him to assert that "one man may sketch something with his pen on half a sheet of paper in one day, or may cut it into a tiny piece of wood with his little iron, and it turns out to be better and more artistic than another's work at which its author labours with the utmost diligence for a whole year". "Book on Fortification". In 1527, Dürer also published "Various Lessons on the Fortification of Cities, Castles, and Localities" ("Etliche Underricht zu Befestigung der Stett, Schloss und Flecken"). It was printed in Nuremberg, probably by Hieronymus Andreae and reprinted in 1603 by Johan Janssenn in Arnhem. In 1535 it was also translated into Latin as "On Cities, Forts, and Castles, Designed and Strengthened by Several Manners: Presented for the Most Necessary Accommodation of War" ("De vrbibus, arcibus, castellisque condendis, ac muniendis rationes aliquot : praesenti bellorum necessitati accommodatissimae"), published by Christian Wechel (Wecheli/Wechelus) in Paris. The work is less proscriptively theoretical than his other works, and was soon overshadowed by the Italian theory of polygonal fortification (the "trace italienne" – see Bastion fort), though his designs seem to have had some influence in the eastern German lands and up into the Baltic region. Fencing. Dürer created many sketches and woodcuts of soldiers and knights over the course of his life. His most significant martial works, however, were made in 1512 as part of his efforts to secure the patronage of Maximilian I. Using existing manuscripts from the Nuremberg Group as his reference, his workshop produced the extensive Οπλοδιδασκαλια sive Armorvm Tractandorvm Meditatio Alberti Dvreri ("Weapon Training, or Albrecht Dürer's Meditation on the Handling of Weapons", MS 26-232). Another manuscript based on the Nuremberg texts as well as one of Hans Talhoffer's works, the untitled Berlin Picture Book (Libr.Pict.A.83), is also thought to have originated in his workshop around this time. These sketches and watercolors show the same careful attention to detail and human proportion as Dürer's other work, and his illustrations of grappling, long sword, dagger, and messer are among the highest-quality in any fencing manual.
2403
Australian rules football
Australian football, also called Australian rules football or Aussie rules, or more simply football or footy, is a contact sport played between two teams of 18 players on an oval field, often a modified cricket ground. Points are scored by kicking the oval ball between the central goal posts (worth six points), or between a central and outer post (worth one point, otherwise known as a "behind"). During general play, players may position themselves anywhere on the field and use any part of their bodies to move the ball. The primary methods are kicking, handballing and running with the ball. There are rules on how the ball can be handled; for example, players running with the ball must intermittently bounce or touch it on the ground. Throwing the ball is not allowed, and players must not get caught holding the ball. A distinctive feature of the game is the mark, where players anywhere on the field who catch the ball from a kick (with specific conditions) are awarded unimpeded possession. Possession of the ball is in dispute at all times except when a free kick or mark is paid. Players can tackle using their hands or use their whole body to obstruct opponents. Dangerous physical contact (such as pushing an opponent in the back), interference when marking, and deliberately slowing the play are discouraged with free kicks, distance penalties, or suspension for a certain number of matches depending on the severity of the infringement. The game features frequent physical contests, spectacular marking, fast movement of both players and the ball, and high scoring. The sport's origins can be traced to football matches played in Melbourne, Victoria, in 1858, inspired by English public school football games. Seeking to develop a game more suited to adults and Australian conditions, the Melbourne Football Club published the first laws of Australian football in May 1859. Australian football has the highest spectator attendance and television viewership of all sports in Australia, while the Australian Football League (AFL), the sport's only fully professional competition, is the nation's wealthiest sporting body. The AFL Grand Final, held annually at the Melbourne Cricket Ground, is the highest attended club championship event in the world. The sport is also played at amateur level in many countries and in several variations. Its rules are governed by the AFL Commission with the advice of the AFL's Laws of the Game Committee. Name. Australian rules football is known by several nicknames, including Aussie rules, football and footy. In some regions, the Australian Football League markets the game as AFL after itself. History. Origins. Primitive forms of football were played sporadically in the Australian colonies in the first half of the 19th century. Compared to cricket and horse racing, football was considered a mere "amusement" by colonists at the time, and while little is known about these early one-off games, evidence does not support a causal link with Australian football. In Melbourne, Victoria, in 1858, in a move that would help to shape Australian football in its formative years, private schools (then termed "public schools" in accordance with English nomenclature) began organising football games inspired by precedents at English public schools. The earliest match, held on 15 June, was between Melbourne Grammar and St Kilda Grammar. 
On 10 July 1858, the Melbourne-based "Bell's Life in Victoria and Sporting Chronicle" published a letter by Tom Wills, captain of the Victoria cricket team, calling for the formation of a "foot-ball club" with a "code of laws" to keep cricketers fit during winter. Born in Australia, Wills played a nascent form of rugby football whilst a pupil at Rugby School in England, and returned to his homeland a star athlete and cricketer. Two weeks later, Wills' friend, cricketer Jerry Bryant, posted an advertisement for a scratch match at the Richmond Paddock adjoining the Melbourne Cricket Ground (MCG). This was the first of several "kickabouts" held that year involving members of the Melbourne Cricket Club, including Wills, Bryant, W. J. Hammersley and J. B. Thompson. Trees were used as goalposts and play typically lasted an entire afternoon. Without an agreed-upon code of laws, some players were guided by rules they had learned in the British Isles, "others by no rules at all". Another milestone in 1858 was a 40-a-side match played under experimental rules between Melbourne Grammar and Scotch College, held at the Richmond Paddock. Umpired by Wills and teacher John Macadam, it began on 7 August and continued over two subsequent Saturdays, ending in a draw with each side kicking one goal. It is commemorated with a statue outside the MCG, and the two schools have since competed annually in the Cordner–Eggleston Cup, the world's oldest continuous football competition. Since the early 20th century, it has been suggested that Australian football was derived from the Irish sport of Gaelic football. However, there is no archival evidence in favour of a Gaelic influence, and the style of play shared between the two modern codes appeared in Australia long before the Irish game evolved in a similar direction. Another theory, first proposed in 1983, posits that Wills, having grown up amongst Aboriginals in Victoria, may have seen or played the Aboriginal ball game of Marn Grook, and incorporated some of its features into early Australian football. There is only circumstantial evidence that he knew of the game, and according to biographer Greg de Moore's research, Wills was "almost solely influenced by his experience at Rugby School". First rules. A loosely organised Melbourne side, captained by Wills, played against other football enthusiasts in the winter and spring of 1858. The following year, on 14 May, the Melbourne Football Club was officially established, making it one of the world's oldest football clubs. Three days later, Wills, Hammersley, Thompson and teacher Thomas H. Smith met near the MCG at the Parade Hotel, owned by Bryant, and drafted ten rules: "The Rules of the Melbourne Football Club". These are the laws from which Australian football evolved. The club aimed to create a simple code suited to the hard playing surfaces around Melbourne, and to eliminate the roughest aspects of English school games—such as "hacking" (shin-kicking) in Rugby School football—to lessen the chance of injuries to working men. In another significant departure from English public school football, the Melbourne rules omitted any offside law. "The new code was as much a reaction against the school games as influenced by them", writes Mark Pennings. The rules were distributed throughout the colony; Thompson in particular did much to promote the new code in his capacity as a journalist. Early competition in Victoria. Following Melbourne's lead, Geelong and Melbourne University also formed football clubs in 1859. 
While many early Victorian teams participated in one-off matches, most had not yet formed clubs for regular competition. A South Yarra club devised its own rules. To ensure the supremacy of the Melbourne rules, the first club-level competition in Australia, the Caledonian Society's Challenge Cup (1861–64), stipulated that only the Melbourne rules were to be used. This law was reinforced by the Athletic Sports Committee (ASC), which ran a variation of the Challenge Cup in 1865–66. With input from other clubs, the rules underwent several minor revisions, establishing a uniform code known as "Victorian rules". In 1866, the "first distinctively Victorian rule", the running bounce, was formalised at a meeting of club delegates chaired by H. C. A. Harrison, an influential pioneer who took up football in 1859 at the invitation of Wills, his cousin. The game around this time was defensive and low-scoring, played low to the ground in congested rugby-style scrimmages. The typical match was a 20-per-side affair, played with a ball that was roughly spherical, and lasted until a team scored two goals. The shape of the playing field was not standardised; matches often took place in rough, tree-spotted public parks, most notably the Richmond Paddock (Yarra Park), known colloquially as the Melbourne Football Ground. Wills argued that the turf of cricket fields would benefit from being trampled upon by footballers in winter, and, as early as 1859, football was allowed on the MCG. However, cricket authorities frequently prohibited football on their grounds until the 1870s, when they saw an opportunity to capitalise on the sport's growing popularity. Football gradually adapted to an oval-shaped field, and most grounds in Victoria expanded to accommodate the dual purpose—a situation that continues to this day. Spread to other colonies. Football became organised in South Australia in 1860 with the formation of the Adelaide Football Club, the oldest football club in Australia outside Victoria. It devised its own rules, and, along with other Adelaide-based clubs, played a variety of codes until 1876, when they uniformly adopted most of the Victorian rules, with South Australian football pioneer Charles Kingston noting their similarity to "the old Adelaide rules". Similarly, Tasmanian clubs quarrelled over different rules until they adopted a slightly modified version of the Victorian game in 1879. The South Australian Football Association (SAFA), the sport's first governing body, formed on 30 April 1877, firmly establishing Victorian rules as the preferred code in that colony. The Victorian Football Association (VFA) formed the following month. Clubs began touring the colonies in the late 1870s, and in 1879 the first intercolonial match took place in Melbourne between Victoria and South Australia. In order to standardise the sport across Australia, delegates representing the football associations of South Australia, Tasmania, Victoria and Queensland met in 1883 and updated the code. New rules such as holding the ball led to a "golden era" of fast, long-kicking and high-marking football in the 1880s, a time which also saw players such as George Coulthard achieve superstardom, as well as the rise of professionalism, particularly in Victoria and Western Australia, where the code took hold during a series of gold rushes. Likewise, when New Zealand experienced a gold rush, the sport arrived with a rapid influx of Australian miners. 
Now known as Australian rules or Australasian rules, the sport became the first football code to develop mass spectator appeal, attracting world record attendances for sports viewing and gaining a reputation as "the people's game". Australian rules football reached Queensland and New South Wales as early as 1866; the sport experienced a period of dominance in the former, and in the latter, several regions remain strongholds of Australian rules, such as the Riverina. However, like in New Zealand, it lost its position as the leading code of both colonies, largely due to the spread of rugby football with British migration, regional rivalries and the lack of strong local governing bodies. In the case of Sydney, denial of access to grounds, the influence of university headmasters from Britain who favoured rugby, and the loss of players to other codes inhibited the game's growth. Emergence of the VFL. In 1896, delegates from six of the wealthiest VFA clubs—Carlton, Essendon, Fitzroy, Geelong, Melbourne and South Melbourne—met to discuss the formation of a breakaway professional competition. Later joined by Collingwood and St Kilda, the clubs formed the Victorian Football League (VFL), which held its inaugural season in 1897. The VFL's popularity grew rapidly as it made several innovations, such as instituting a finals system, reducing teams from 20 to 18 players, and introducing the behind as a score. Richmond and University joined the VFL in 1908, and by 1925, with the addition of Hawthorn, Footscray and North Melbourne, it had become the preeminent league in the country and would take a leading role in many aspects of the sport. Interstate football and the World Wars. The time around the federation of the Australian colonies in 1901 saw Australian rules undergo a revival in New South Wales, New Zealand and Queensland. In 1903, both the Queensland Australian Football League and the NSW Australian Football Association were established, and in New Zealand, as it moved towards becoming a dominion, leagues were also established in the major cities. This renewed popularity helped encourage the formation of the Australasian Football Council, which in 1908 in Melbourne staged the first national interstate competition, the Jubilee Australasian Football Carnival, with teams representing each state and New Zealand. The game was also established early on in the new territories. In the new national capital Canberra both soccer and rugby had a head start, but following the first matches in 1911, Australian rules football in the Australian Capital Territory became a major participation sport. By 1981 it had become much neglected and quickly lagged behind the other football codes. Australian rules football in the Northern Territory began shortly after the outbreak of the war in 1916 with the first match in Darwin. The game went on to become the most popular sport in the Territory and build the highest participation rate for the sport nationally. Both World War I and World War II had a devastating effect on Australian football and on Australian sport in general. While scratch matches were played by Australian "diggers" in remote locations around the world, the game lost many of its great players to wartime service. Some clubs and competitions never fully recovered. Between 1914 and 1915, a proposed hybrid code of Australian football and rugby league, the predominant code of football in New South Wales and Queensland, was trialled without success. 
In Queensland, the state league went into recess for the duration of the war. VFL club University left the league and went into recess due to severe casualties. The WAFL lost two clubs and the SANFL was suspended for one year in 1916 due to heavy club losses. The Anzac Day match, the annual game between Essendon and Collingwood on Anzac Day, is one example of how the war continues to be remembered in the football community. The role of the Australian National Football Council (ANFC) was primarily to govern the game at a national level and to facilitate interstate representative and club competition. In 1968, the ANFC revived the Championship of Australia, a competition first held in 1888 between the premiers of the VFA and SAFA. Although clubs from other states were at times invited, the final was almost always between the premiers from the two strongest state competitions of the time—South Australia and Victoria—with Adelaide hosting most of the matches at the request of the SAFA/SANFL. The last match took place in 1976, with North Adelaide being the last non-Victorian winner in 1972. Between 1976 and 1987, the ANFC, and later the Australian Football Championships (AFC) ran a night series, which invited clubs and representative sides from around the country to participate in a knock-out tournament parallel to the premiership seasons, which Victorian sides still dominated. With the lack of international competition, state representative matches were regarded with great importance. Due in part to the VFL poaching talent from other states, Victoria dominated interstate matches for three-quarters of a century. State of Origin rules, introduced in 1977, stipulated that rather than representing the state of their adopted club, players would return to play for the state they were first recruited in. This instantly broke Victoria's stranglehold over state titles and Western Australia and South Australia began to win more of their games against Victoria. Both New South Wales and Tasmania scored surprise victories at home against Victoria in 1990. Towards a national league. The term "Barassi Line", named after VFL star Ron Barassi, was coined by scholar Ian Turner in 1978 to describe the "fictitious geographical barrier" separating the rugby-following parts of New South Wales and Queensland from the rest of the country, where Australian football reigned. It became a reference point for the expansion of Australian football and for establishing a national league. The way the game was played had changed dramatically due to innovative coaching tactics, with the phasing out of many of the game's kicking styles and the increasing use of handball; while presentation was influenced by television. In 1982, in a move that heralded big changes within the sport, one of the original VFL clubs, South Melbourne, relocated to Sydney and became known as the Sydney Swans. In the late 1980s, due to the poor financial standing of many of the Victorian clubs, and a similar situation existing in Western Australia in the sport, the VFL pursued a more national competition. Two more non-Victorian clubs, West Coast and Brisbane, joined the league in 1987 generating more than $8 million in license revenue for the Victorian clubs and increasing broadcast revenues which helped the Victorian clubs survive. In their early years, the Sydney and Brisbane clubs struggled both on and off-field because the substantial TV revenues they generated by playing on a Sunday went to the VFL. 
To protect these revenues the VFL granted significant draft concessions and financial aid to keep the expansion clubs competitive. The VFL changed its name to the Australian Football League (AFL) for the 1990 season, and over the next decade, three non-Victorian clubs gained entry: Adelaide (1991), Fremantle (1995) and the SANFL's Port Adelaide (1997), the only pre-existing club outside Victoria to join the league. In 2011 and 2012, respectively, two new non-Victorian clubs were added to the competition: Gold Coast and Greater Western Sydney. The AFL, currently with 18 member clubs, is the sport's elite competition and most powerful body. Following the emergence of the AFL, state leagues were quickly relegated to a second-tier status. The VFA merged with the former VFL reserves competition in 1998, adopting the VFL name. State of Origin also declined in importance, especially after an increasing number of player withdrawals. The AFL turned its focus to the annual International Rules Series against Ireland in 1998 before abolishing State of Origin the following year. State and territorial leagues still contest interstate matches, as do AFL Women players. Although a Tasmanian AFL bid is ongoing, the AFL's focus has been on expanding into markets outside Australian football's traditional heartlands in order to maximise its broadcast revenue. The AFL regularly schedules pre-season exhibition matches in all Australian states and territories as part of the Regional Challenge. The AFL signalled further attempts at expansion in the 2010s by hosting home-and-away matches in New Zealand, followed by China. Laws of the game. Field. Australian rules football playing fields have no fixed dimensions but at senior level are typically between long and wide wing-to-wing. The field, like the ball, is oval-shaped, and in Australia, cricket grounds are often used. No more than 18 players of each team (or, in AFL Women's, 16 players) are permitted to be on the field at any time. Up to four interchange (reserve) players may be swapped for those on the field at any time during the game. In Australian rules terminology, these players wait for substitution "on the bench"—an area with a row of seats on the sideline. Players must interchange through a designated interchange "gate" with strict penalties for having too many players from one team on the field. In addition, some leagues have each team designate one player as a substitute who can be used to make a single permanent exchange of players during a game. There is no offside rule nor are there set positions in the rules; unlike many other forms of football, players from both teams may disperse across the whole field before the start of play. However, a typical on-field structure consists of six forwards, six defenders or "backmen" and six midfielders, usually two wingmen, one centre and three followers, including a ruckman, ruck-rover and rover. Only four players from each team are allowed within the centre square () at every centre bounce, which occurs at the commencement of each quarter, and to restart the game after a goal is scored. There are also other rules pertaining to allowed player positions during set plays (that is, after a mark or free kick) and during kick-ins following the scoring of a behind. Match duration. A game consists of four quarters and a timekeeper officiates their duration. 
At the professional level, each quarter consists of 20 minutes of play, with the clock being stopped for instances such as scores, the ball going out of bounds or at the umpire's discretion, e.g. for serious injury. Lower grades of competition might employ shorter quarters of play. The umpire signals "time-off" to stop the clock for various reasons, such as the player in possession being tackled into stagnant play. Time resumes when the umpire signals "time-on" or when the ball is brought into play. Stoppages cause quarters to extend approximately 5–10 minutes beyond the 20 minutes of play. 6 minutes of rest is allowed before the second and fourth quarters, and 20 minutes of rest is allowed at "half-time". The official game clock is available only to the timekeeper(s), and is not displayed to the players, umpires or spectators. The only public knowledge of game time is when the timekeeper sounds a siren at the start and end of each quarter. Coaching staff may monitor the game time themselves and convey information to players via on-field trainers or substitute players. Broadcasters usually display an approximation of the official game time for television audiences, although some will now show the exact time remaining in a quarter. General play. Games are officiated by umpires. Before the game, the winner of a coin toss determines which directions the teams will play to begin. Australian football begins after the first siren, when the umpire bounces the ball on the ground (or throws it into the air if the condition of the ground is poor), and the two ruckmen (typically the tallest players from each team) battle for the ball in the air on its way back down. This is known as the "ball-up". Certain disputes during play may also be settled with a "ball-up" from the point of contention. If the ball is kicked or hit from a ball-up or boundary throw-in over the boundary line or into a behind post without the ball bouncing, a free kick is paid for out of bounds on the full. A free kick is also paid if the ball is deemed by the umpire to have been deliberately carried or directed out of bounds. If the ball travels out of bounds in any other circumstances (for example, contested play results in the ball being knocked out of bounds) a boundary umpire will stand with his back to the infield and return the ball into play with a "throw-in", a high backwards toss back into the field of play. The ball can be propelled in any direction by way of a foot, clenched fist (called a handball or "handpass") or open-hand tap but it cannot be thrown under any circumstances. Once a player takes possession of the ball he must dispose of it by either kicking or handballing it. Any other method of disposal is illegal and will result in a free kick to the opposing team. This is usually called "incorrect disposal", "dropping the ball" or "throwing". If the ball is not in the possession of one player it can be moved on with any part of the body. A player may run with the ball, but it must be bounced or touched on the ground at least once every . Opposition players may bump or tackle the player to obtain the ball and, when tackled, the player must dispose of the ball cleanly or risk being penalised for holding the ball unless the umpire rules no prior opportunity for disposal. The ball carrier may only be tackled between the shoulders and knees. If the opposition player forcefully contacts a player in the back while performing a tackle, the opposition player will be penalised for a push in the back. 
If the opposition tackles the player with possession below the knees (a "low tackle" or a "trip") or above the shoulders (a "high tackle"), the team with possession of the football gets a free kick. If a player takes possession of the ball that has travelled more than from another player's kick, by way of a catch, it is claimed as a "mark" (meaning that the game stops while he prepares to kick from the point at which he marked). Alternatively, he may choose to "play on" forfeiting the set shot in the hope of pressing an advantage for his team (rather than allowing the opposition to reposition while he prepares for the free kick). Once a player has chosen to play on, normal play resumes and the player who took the mark is again able to be tackled. There are different styles of kicking depending on how the ball is held in the hand. The most common style of kicking seen in today's game, principally because of its superior accuracy, is the drop punt, where the ball is dropped from the hands down, almost to the ground, to be kicked so that the ball rotates in a reverse end over end motion as it travels through the air. Other commonly used kicks are the torpedo punt (also known as the spiral, barrel, or screw punt), where the ball is held flatter at an angle across the body, which makes the ball spin around its long axis in the air, resulting in extra distance (similar to the traditional motion of an American football punt), and the checkside punt or "banana", kicked across the ball with the outside of the foot used to curve the ball (towards the right if kicked off the right foot) towards targets that are on an angle. There is also the "snap", which is almost the same as a checkside punt except that it is kicked off the inside of the foot and curves in the opposite direction. It is also possible to kick the ball so that it bounces along the ground. This is known as a "grubber". Grubbers can bounce in a straight line, or curve to the left or right. Apart from free kicks, marks or when the ball is in the possession of an umpire for a "ball up" or "throw in", the ball is always in dispute and any player from either side can take possession of the ball. Scoring. A "goal", worth 6 points, is scored when the football is propelled through the goal posts at any height (including above the height of the posts) by way of a kick from the attacking team. It may fly through "on the full" (without touching the ground) or bounce through, but must not have been touched, on the way, by any player from either team or a goalpost. A goal cannot be scored from the foot of an opposition (defending) player. A "behind", worth 1 point, is scored when the ball passes between a goal post and a behind post at any height, or if the ball hits a goal post, or if any player sends the ball between the goal posts by touching it with any part of the body other than a foot. A behind is also awarded to the attacking team if the ball touches any part of an opposition player, including a foot, before passing between the goal posts. When an opposition player deliberately scores a behind for the attacking team (generally as a last resort to ensure that a goal is not scored) this is termed a rushed behind. As of the 2009 AFL season, a free kick is awarded against any player who deliberately rushes a behind. The goal umpire signals a goal with two hands pointed forward at elbow height, or a behind with one hand. Both goal umpires then wave flags above their heads to communicate this information to the scorers. 
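The scoring arithmetic described above is simple enough to capture in a few lines of code. The following Python sketch is purely illustrative (the function names are our own, and the figures reuse the Sydney–Geelong example discussed below); it computes a team's total from its goals and behinds and formats it in the conventional "goals.behinds (total)" notation.

def total_points(goals: int, behinds: int) -> int:
    # Each goal is worth 6 points and each behind 1 point.
    return goals * 6 + behinds

def scoreline(team: str, goals: int, behinds: int) -> str:
    # Format a score in the conventional "goals.behinds (total)" notation.
    return f"{team} {goals}.{behinds} ({total_points(goals, behinds)})"

home = scoreline("Sydney", 17, 5)      # 'Sydney 17.5 (107)'
away = scoreline("Geelong", 10, 17)    # 'Geelong 10.17 (77)'
margin = total_points(17, 5) - total_points(10, 17)  # 30
print(f"{home} defeated {away} by {margin} points")

Run as written, the sketch prints "Sydney 17.5 (107) defeated Geelong 10.17 (77) by 30 points", matching the worked example in the score report below.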
The team that has scored the most points at the end of play wins the game. If the scores are level on points at the end of play, then the game is a draw; extra time applies only during finals matches in some competitions. As an example of a score report, consider a match between Sydney and Geelong, with the former as the home team. Sydney's score of 17 goals and 5 behinds equates to 107 points. Geelong's score of 10 goals and 17 behinds equates to a 77-point tally. Sydney wins the match by a margin of 30 points. Such a result would be written as "Sydney 17.5 (107) defeated Geelong 10.17 (77)" and spoken as "Sydney, seventeen-five, one hundred and seven, defeated Geelong, ten-seventeen, seventy-seven". Additionally, it can be said that Sydney defeated Geelong by 30 points. The home team is typically listed first and the visiting side second, and the scoreline is written with respect to the home side; a team that won in successive weeks, once as the home side and once as the visiting side, would therefore appear first in one scoreline and second in the other. A draw is recorded in the same format, with the two totals level. Structure and competitions. The football season proper is from March to August (early autumn to late winter in Australia) with finals being held in September and October. In the tropics, the game is sometimes played in the wet season (October to March). The AFL is recognised by the Australian Sports Commission as the National Sporting Organisation for Australian Football. There are also seven state/territory-based organisations in Australia, all of which are affiliated with the AFL. These state leagues hold annual semi-professional club competitions, with some also overseeing more than one league. Local semi-professional or amateur organisations and competitions are often affiliated to their state organisations. The AFL is the "de facto" world governing body for Australian football. There are also a number of affiliated organisations governing amateur clubs and competitions around the world. For almost all Australian football club competitions, the aim is to win the Premiership. The premiership is typically decided by a finals series. The teams that occupy the highest positions on the ladder after the home-and-away season play off in a "semi-knockout" finals series, culminating in a single Grand Final match to determine the premiers. Between four and eight teams contest a finals series, typically using the AFL final eight system or a variation of the McIntyre system. The team which finishes first on the ladder after the home-and-away season is referred to as a "minor premier", but this usually holds little stand-alone significance, other than receiving a better draw in the finals. Many metropolitan leagues have several tiered divisions, with promotion of the lower division premiers and relegation of the upper division's last placed team at the end of each year. At present, none of the top-level national or state leagues in Australia utilise this structure. Women and Australian football. The high level of interest shown by women in Australian football is considered unique among the world's football codes. It was the case in the 19th century, as it is in modern times, that women made up approximately half of total attendances at Australian football matches—a far greater proportion than, for example, the estimated 10 per cent of women that comprise British soccer crowds. This has been attributed in part to the egalitarian character of Australian football's early years in public parks where women could mingle freely and support the game in various ways. 
In terms of participation, there are occasional 19th-century references to women playing the sport, but it was not until the 1910s that the first organised women's teams and competitions appeared. Women's state leagues emerged in the 1980s, and in 2013, the AFL announced plans to establish a nationally televised women's competition. Amidst a surge in viewing interest and participation in women's football, the AFL pushed the founding date of the competition, named AFL Women's, to 2017. Eight AFL clubs won licences to field sides in its inaugural season. By the seventh season, which began in August 2022, all 18 clubs fielded a women's side. Variations and related sports. Many related games have emerged from Australian football, mainly with variations of contact to encourage greater participation. These include Auskick (played by children aged between 5 and 12), kick-to-kick (and its variants end-to-end footy and marks up), rec footy, 9-a-side footy, masters Australian football, handball and longest-kick competitions. Players outside Australia sometimes engage in related games adapted to available fields, like metro footy (played on gridiron fields) and Samoa rules (played on rugby fields). One such prominent example in use since 2018 is AFLX, a shortened variation of the game with seven players a side, played on a soccer-sized pitch. International rules football. The similarities between Australian football and the Irish sport of Gaelic football have allowed for the creation of a hybrid code known as international rules football. The first international rules matches were contested in Ireland during the 1967 Australian Football World Tour. Since then, various sets of compromise rules have been trialed, and in 1984 the International Rules Series commenced with national representative sides selected by Australia's state leagues (later by the AFL) and the Gaelic Athletic Association (GAA). The competition became an annual event in 1998, but was postponed indefinitely in 2007 when the GAA pulled out due to Australia's severe and aggressive style of play. It resumed in Australia in 2008 under new rules to protect the player with the ball. Global reach. Australian rules football was played outside Australasia as early as 1888 when Australians studying at Edinburgh University and London University formed teams and competed in London. By the early 20th century, the game had spread with the Australian diaspora to South Africa, the United States and other parts of the Anglosphere; however this growth went into rapid decline following World War I. After World War II, the sport experienced growth in the Pacific region, particularly in Papua New Guinea and Nauru, where Australian football is now the national sport. Today, the sport is played at an amateur level in various countries throughout the world. 23 countries have participated in the International Cup and 9 countries have participated in the AFL Europe Championship with both competitions prohibiting Australian players. Over 20 countries have either affiliation or working agreements with the AFL. There have been many VFL/AFL players who were born outside Australia, an increasing number of which have been recruited through initiatives and, more recently, international scholarship programs. Many of these players have been Irish, as interest in recruiting talented Gaelic footballers dates back to the start of the Irish experiment in the 1960s. 
Irish players have since not only become starters for their clubs but have also led their competitions (Jim Stynes) and won premierships (Tadhg Kennelly, Ailish Considine). Most of the current amateur clubs and leagues in existence have developed since the 1980s, when leagues began to be established in North America, Europe and Asia. The sport developed a cult following in the United States when matches were broadcast on the fledgling ESPN network in the 1980s. As the size of the Australian diaspora has increased, so has the number of clubs outside Australia. This expansion has been further aided by multiculturalism and assisted by exhibition matches as well as exposure generated through players who have converted to and from other football codes. In Papua New Guinea, New Zealand, South Africa, Canada, and the United States there are many thousands of players. A fan of the sport since attending school in Geelong, King Charles is the Patron of AFL Europe. In 2013, participation across AFL Europe's 21 member nations was more than 5,000 players, the majority of whom were European nationals rather than Australian expats. The sport also has a growing presence in India. The AFL became the de facto governing body when it pushed for the closure of the International Australian Football Council in 2002. The International Cup, held triennially since 2002, is the highest level of international competition. Although Australian rules football has not yet been a full sport at the Olympic Games or Commonwealth Games, when Melbourne hosted the 1956 Summer Olympics, with the MCG as the main stadium, Australian rules football was chosen as the native sport to be demonstrated in accordance with International Olympic Committee rules. On 7 December, the sport was demonstrated as an exhibition match at the MCG between a team of VFL and VFA amateurs and a team of VAFA amateurs (professionals were excluded due to the Olympics' strict amateurism policy at the time). The Duke of Edinburgh was among the spectators for the match, which the VAFA won by 12.9 (81) to 8.7 (55). In addition, when Brisbane hosted the 1982 Commonwealth Games, the sport was also demonstrated at the Gabba, with a rematch of that year's VFL Grand Final on 6 October, which Richmond won by 28.16 (184) to Carlton's 26.10 (166). Cultural impact and popularity. Australian football is a sport rich in tradition and Australian cultural references, especially surrounding the rituals of gameday for players, officials and supporters. Australian football has attracted more overall interest among Australians than any other football code, and, when compared with all sports throughout the nation, has consistently ranked first in the winter reports, and third behind cricket and swimming in summer. Over 1,057,572 fans were paying members of AFL clubs in 2019. The 2021 AFL Grand Final was the year's most-watched television broadcast in Australia, with an in-home audience of up to 4.11 million. In 2019, there were 1,716,276 registered participants in Australia including 586,422 females (34 per cent of the overall total) and more than 177,000 registered outside Australia including 79,000 females (45 per cent of the overall total). In the arts and popular culture. Australian football has inspired many literary works, from poems by C. J. Dennis and Peter Goldsworthy, to the fiction of Frank Hardy and Kerry Greenwood. Historians Manning Clark and Geoffrey Blainey have also written extensively on the sport. 
Slang within Australian football has impacted Australian English more broadly, with a number of expressions taking on new meanings in non-sporting contexts, e.g., to "get a guernsey" is to gain recognition or approval, while "shirt-fronting" someone is to accost them. In 1889, Australian impressionist painter Arthur Streeton captured football games "en plein air" for the 9 by 5 Impression Exhibition, titling one work "The National Game". Paintings by Sidney Nolan ("Footballer", 1946) and John Brack ("Three of the Players", 1953) helped to establish Australian football as a serious subject for modernists, and many Aboriginal artists have explored the game, often fusing it with the mythology of their region. In cartooning, WEG's VFL/AFL premiership posters—inaugurated in 1954—have achieved iconic status among Australian football fans. Australian football statues can be found throughout the country, some based on famous photographs, among them Haydn Bunton Sr.'s leap, Jack Dyer's charge and Nicky Winmar lifting his jumper. In the 1980s, a group of postmodern architects based in Melbourne began incorporating references to Australian football into their buildings, an example being Building 8 by Edmond and Corrigan. Dance sequences based on Australian football feature heavily in Robert Helpmann's 1964 ballet "The Display", his first and most famous work for the Australian Ballet. The game has also inspired well-known plays such as "And the Big Men Fly" (1963) by Alan Hopgood and David Williamson's "The Club" (1977), which was adapted into a 1980 film, directed by Bruce Beresford. Mike Brady's 1979 hit "Up There Cazaly" is considered an Australian football anthem, and references to the sport can be found in works by popular musicians, from singer-songwriter Paul Kelly to the alternative rock band TISM. Many Australian football video games have been released, most notably the AFL series. Australian Football Hall of Fame. For the centenary of the VFL/AFL in 1996, the Australian Football Hall of Fame was established. That year, 136 significant figures across the various competitions were inducted into the Hall of Fame. An additional 115 inductees have been added since the creation of the Hall of Fame, resulting in a total number of 251 inductees. In addition to the Hall of Fame, select members are chosen to receive the elite "Legend" status. Due to restrictions limiting the number of Legend status players to 10% of the total number of Hall of Fame inductees, there are currently 25 players with the status in the Hall of Fame.
2405
Aon (company)
Aon PLC () is a global professional services and management consulting firm that offers a range of risk-mitigation products, including commercial risk, investment, wealth, health, human capital, and reinsurance solutions. The firm also provides data and analytics services, strategy consulting through Aon Inpoint and investment banking advisory through Aon Securities. Aon has approximately 50,000 employees across 120 countries. Founded in Chicago by Patrick Ryan, Aon was created in 1982 when the Ryan Insurance Group merged with the Combined Insurance Company of America. In 1987, that company was renamed Aon from "aon", a Gaelic word meaning "one". The company is globally headquartered in London with its North America operations based in Chicago at the Aon Center. Aon is listed on the New York Stock Exchange under AON with a market cap of $65 billion in April 2023. History. W. Clement Stone's mother bought a small Detroit insurance agency, and in 1918 brought her son into the business. Mr. Stone sold low-cost, low-benefit accident insurance, underwriting and issuing policies on-site. The next year he founded his own agency, the Combined Registry Co. As the Great Depression began, Stone reduced his workforce and improved training. Forced by his son's respiratory illness to winter in the South, Stone moved to Arkansas and Texas. In 1939 he bought American Casualty Insurance Co. of Dallas, Texas. It was consolidated with other purchases as the Combined Insurance Co. of America in 1947. The company continued through the 1950s and 1960s, continuing to sell health and accident policies. In the 1970s, Combined expanded overseas despite being hit hard by the recession. In 1982, after 10 years of stagnation under Clement Stone Jr., the elder Stone, then 79, resumed control until the completion of a merger with Ryan Insurance Co. allowed him to transfer control to Patrick Ryan. Ryan, the son of a Ford dealer in Wisconsin and a graduate of Northwestern University, had started his company as an auto credit insurer in 1964. In 1976, the company bought the insurance brokerage units of the Esmark conglomerate. Ryan focused on insurance brokering and added more upscale insurance products. He also trimmed staff and took other cost-cutting measures, and in 1987 he changed Combined's name to Aon. In 1992, he bought Dutch insurance broker Hudig-Langeveldt. In 1995, the company sold its remaining direct life insurance holdings to General Electric to focus on consulting. Aon built a global presence through purchases. In 1997, it bought The Minet Group, as well as insurance brokerage Alexander & Alexander Services, Inc. in a deal that made Aon (temporarily) the largest insurance broker worldwide. The firm made no US buys in 1998, but doubled its employee base with purchases including Spain's largest retail insurance broker, Gil y Carvajal, and the formation of Aon Korea. Responding to industry demands, Aon announced its new fee disclosure policy in 1999, and the company reorganised to focus on buying personal line insurance firms and to integrate its acquisitions. That year it bought Nikols Sedgwick Group, an Italian insurance firm, and formed RiskAttack (with Zurich US), a risk analysis and financial management concern aimed at technology companies. The cost of integrating its numerous purchases, however, hammered profits in 1999. 
Despite its troubles, in 2000 Aon bought Reliance Group's accident and health insurance business, as well as Actuarial Sciences Associates, a compensation and employee benefits consulting company. Later in that year, however, the company decided to cut 6% of its workforce as part of a restructuring effort. In 2003, the company saw revenues increase primarily because of rate hikes in the insurance industry. Also that year, Endurance Specialty, a Bermuda-based underwriting operation that Aon helped to establish in November 2001 along with other investors, went public. The next year Aon sold most of its holdings in Endurance. In late 2007, Aon announced the divestiture of its underwriting business. With this move, the firm sold off its two major underwriting subsidiaries: Combined Insurance Company of America (acquired by ACE Limited for $2.4 billion) and Sterling Life Insurance Company (purchased by Munich Re Group for $352 million). The low margin and capital-intensive nature of the underwriting industry was the primary reason for the firm's decision to divest. This renewed focus on broking manifested in November 2008, when Aon announced it had acquired reinsurance intermediary and capital advisor Benfield Group Limited for $1.75 billion. The acquisition amplified the firm's broking capabilities, positioning Aon as one of the largest players in the reinsurance brokerage industry. In 2010, Aon made its most significant acquisition to date with the purchase of Hewitt Associates for $4.9 billion. Aside from drastically boosting Aon's human resources consulting capacity and entering the firm into the business process outsourcing industry, the move added 23,000 colleagues and more than $3 billion in revenue. In January 2012, Aon announced that its headquarters would be moved to London, although North American operations and jobs remained in Chicago. On 10 February 2017, Aon announced that it was selling its employee benefits outsourcing business to private equity firm The Blackstone Group for US$4.8 billion (£3.8 billion). In February 2020, Aon named Eric Andersen as president of Aon after co-president Michael O'Connor departed the company to pursue new opportunities. Andersen reports to Greg Case, the firm's CEO. In June 2020, Aon announced that it was planning to repay the temporary 20% pay cut imposed on 70% of employees, which had been announced in an April 2020 statement regarding the COVID-19 pandemic. On 30 June 2020, Aon announced it would repay staff in full, plus 5% of the withheld amount. In June 2020, Willis Towers Watson called its shareholders to two meetings, scheduled for 26 August 2020, to discuss its proposed acquisition by Aon. It was revealed that the US Department of Justice had requested more information on the deal under antitrust rules. September 11 attacks. Aon's New York offices were on the 92nd and 98th–105th floors of the South Tower of the World Trade Center at the time of the September 11 attacks. When the North Tower was struck by American Airlines Flight 11 at 8:46 a.m., an evacuation of Aon's offices was quickly initiated by executive Eric Eisenberg, and 924 of the estimated 1,100 Aon employees present at the time managed to get below the 77th floor before United Airlines Flight 175 crashed between Floors 77 and 85 at 9:03 a.m. Many, however, did not manage to get below that level in the 17 minutes they had between the two impacts. As a result, 176 employees of Aon were killed in the crash or died in the fires or from smoke inhalation. 
At 9:59 a.m., the tower finally collapsed, killing any survivors still within, including Eisenberg and Kevin Cosgrove. Spitzer investigation. In 2004–2005, Aon, along with other brokers including Marsh & McLennan and Willis, fell under regulatory investigation by New York Attorney General Eliot Spitzer and other state attorneys general. At issue was the practice of insurance companies' payments to brokers (known as contingent commissions). The payments were thought to create a conflict of interest, swaying broker decisions on behalf of carriers rather than customers. In the spring of 2005, without acknowledging any wrongdoing, Aon agreed to a $190 million settlement, payable over 30 months. UK regulatory breach. In January 2009, Aon was fined £5.69 million in the UK by the Financial Services Authority, which stated that the fine related to the company's inadequate bribery and corruption controls, claiming that between 14 January 2005 and 30 September 2007 Aon had failed to properly assess the risks involved in its dealings with overseas firms and individuals. The Authority did not find that any money had actually made its way to illegal organisations. Aon qualified for a 30% discount on the fine as a result of its cooperation with the investigation. Aon said its conduct was not deliberate, adding it had since "significantly strengthened and enhanced its controls around the usage of third parties". US Foreign Corrupt Practices Act violations. In December 2011, Aon Corporation paid a $16.26 million penalty to the U.S. Securities and Exchange Commission and the U.S. Department of Justice for violations of the US Foreign Corrupt Practices Act. According to the Securities and Exchange Commission, Aon's subsidiaries made improper payments of over $3.6 million to government officials and third-party facilitators in Costa Rica, Egypt, Vietnam, Indonesia, the United Arab Emirates, Myanmar and Bangladesh, between 1983 and 2007, to obtain and retain insurance contracts. Major acquisitions. On 5 January 2007, Aon announced that its Aon Affinity group had acquired the WedSafe Wedding Insurance program. On 22 August 2008, Aon announced that it had acquired London-based Benfield Group. The acquisition price was US$1.75 billion or £935 million, with US$170 million of debt. On 5 March 2010, Hewitt Associates announced that it had acquired Senior Educators Ltd. The acquisition offered companies a new way to address retiree medical insurance commitments. On 12 July 2010, Aon announced that it had agreed to buy Lincolnshire, Illinois-based Hewitt Associates for $4.9 billion in cash and stock. On 7 April 2011, Aon announced that it had acquired Johannesburg, South Africa-based Glenrand MIB. Financial terms were not disclosed. On 19 July 2011, Aon announced that it had bought Westfield Financial Corp., the owner of insurance-industry consulting firm Ward Financial Group, from Ohio Farmers Insurance Co. Financial terms were not disclosed. On 22 October 2012, Aon announced that it had agreed to buy OmniPoint, Inc., a Workday consulting firm. Financial terms were not disclosed. On 16 June 2014, Aon announced that it had agreed to buy National Flood Services, Inc., a large processor of flood insurance, from Stoneriver Group, L.P. On 31 October 2016, Aon's Aon Risk Solutions completed the acquisition of Stroz Friedberg LLC, a specialised risk management firm focusing on cybersecurity. On 14 November 2016, Aon acquired CoCubes, an online Indian assessment firm that facilitates the hiring of entry-level engineering graduates. 
On 10 February 2017, Aon plc agreed to sell its human resources outsourcing platform for US$4.8 billion (£3.8 billion) to Blackstone Group L.P. (BX.N), creating a new company called Alight Solutions. In September 2017, Aon announced its intent to purchase real estate investment management firm The Townsend Group from Colony NorthStar for $475 million, expanding Aon's property investment management portfolio. On 9 March 2020, Aon announced its merger with Willis Towers Watson for nearly $30 billion in an all-stock deal that would have created the world's largest insurance broker. As of 21 May 2020, the Willis board was under investigation over the merger agreement with Aon. The deal was called off in July 2021. Operations. Manchester United. On 3 June 2009, it was reported that Aon had signed a four-year shirt sponsorship deal with English football giant Manchester United. On 1 June 2010, Aon replaced American insurance company AIG as the principal sponsor of the club. The Aon logo was prominently displayed on the front of the club's shirts until the 2014/2015 season, when Chevrolet replaced Aon as shirt sponsor. The deal was said to be worth £80 million over four years, replacing United's deal with AIG as the most lucrative shirt deal in history at the time. In April 2013, Aon signed a new eight-year deal with Manchester United to rename their training ground as the Aon Training Complex and sponsor the club's training kits, reportedly worth £180 million to the club.
2406
Alban Berg
Alban Maria Johannes Berg (9 February 1885 – 24 December 1935) was an Austrian composer of the Second Viennese School. His compositional style combined Romantic lyricism with the twelve-tone technique. Although he left a relatively small "oeuvre", he is remembered as one of the most important composers of the 20th century for his expressive style encompassing "entire worlds of emotion and structure". Berg was born and lived in Vienna. He began to compose only at the age of fifteen. He studied counterpoint, music theory and harmony with Arnold Schoenberg between 1904 and 1911, and adopted his principles of "developing variation" and the twelve-tone technique. Berg's major works include the operas "Wozzeck" (1924) and "Lulu" (1935, finished posthumously), the chamber pieces "Lyric Suite" and Chamber Concerto, as well as a Violin Concerto. He also composed a number of songs ("lieder"). He is said to have brought more "human values" to the twelve-tone system, his works seen as more "emotional" than Schoenberg's. His music had a surface glamour that won him admirers when Schoenberg himself had few. Berg died from sepsis in 1935. Life and career. Early life. Berg was born in Vienna, the third of four children of Johanna and Konrad Berg. His father ran a successful export business, and the family owned several estates in Vienna and the countryside. The family's financial situation took a turn for the worse after the death of Konrad Berg in 1900, and it particularly affected young Berg, who had to repeat both his sixth and seventh grades in order to pass the exams. The architect Hermann Watznauer, one of his closest lifelong friends and his earliest biographer (writing under the pseudonym Hermann Herrenried), was ten years Berg's senior and became a father figure to him, partly at Konrad's request. Berg wrote him letters as long as thirty pages, often in florid, dramatic prose with idiosyncratic punctuation. Berg considered a career as a writer several times; he was more interested in literature than music as a child and did not begin to compose until he was fifteen, when he started to teach himself music, although he did take piano lessons from his sister's governess. With Marie Scheuchl, a maid in the family estate of Berghof in Carinthia and fifteen years his senior, he fathered a daughter, Albine, born 4 December 1902. With little prior music education, he began studying counterpoint, music theory, and harmony under Arnold Schoenberg in October 1904. By 1906 he was studying music full-time; by 1907 he began composition lessons. His student compositions included five drafts for piano sonatas. He also wrote songs, including his "Seven Early Songs" ("Sieben frühe Lieder"), three of which were Berg's first publicly performed work in a concert that featured the music of Schoenberg's pupils in Vienna that year. The early sketches eventually culminated in the Piano Sonata, Op. 1 (1907–1908), one of the most formidable "first" works ever written. Berg studied with Schoenberg for six years until 1911. Among Schoenberg's teachings was the idea that the unity of a musical composition depends upon all its aspects being derived from a single basic idea; this idea was later known as "developing variation". Berg passed this on to his students, one of whom, Theodor W. Adorno, stated: "The main principle he conveyed was that of variation: everything was supposed to develop out of something else and yet be intrinsically different". 
The Piano Sonata is an example—the whole composition is derived from the work's opening quartal gesture and its opening phrase. Innovation. Berg was a part of Vienna's cultural elite during the heady "fin de siècle" period. His circle included the musicians Alexander von Zemlinsky and Franz Schreker, the painter Gustav Klimt, the writer and satirist Karl Kraus, the architect Adolf Loos, and the poet Peter Altenberg. In 1906 Berg met the singer Helene Nahowski (1885–1976), daughter of a wealthy family (rumoured to be in fact the illegitimate daughter of Emperor Franz Joseph I from his liaison with Anna Nahowski). Despite the outward hostility of her family, the couple married on 3 May 1911. In 1913 two of Berg's "Altenberg Lieder" (1912) were premièred in Vienna, conducted by Schoenberg in the infamous "Skandalkonzert". Settings of aphoristic poetic utterances, the songs are accompanied by a very large orchestra. The performance caused a riot, and had to be halted. Berg effectively withdrew the work, and it was not performed in full until 1952. The full score remained unpublished until 1966. From 1915 to 1918 Berg served in the Austro-Hungarian Army. During a period of leave in 1917 he accelerated work on his first opera, "Wozzeck". After the end of World War I, he settled again in Vienna, where he taught private pupils. He also helped Schoenberg run his Society for Private Musical Performances, which sought to create the ideal environment for the exploration and appreciation of unfamiliar new music by means of open rehearsals, repeat performances, and the exclusion of professional critics. Berg had a particular interest in the number 23, using it to structure several works. Various suggestions have been made as to the reason for this interest: that he took it from the biorhythms theory of Wilhelm Fliess, in which a 23-day cycle is considered significant, or because he first suffered an asthma attack on the 23rd of the month. Success of "Wozzeck" and inception of "Lulu" (1924–29). In 1924 three excerpts from "Wozzeck" were performed, which brought Berg his first public success. The opera, which Berg completed in 1922, was first performed on 14 December 1925, with Erich Kleiber conducting the premiere in Berlin. Today, "Wozzeck" is seen as one of the century's most important works. Berg made a start on his second opera, the three-act "Lulu", in 1928 but interrupted the work in 1929 for the concert aria "Der Wein", which he completed that summer. "Der Wein" presaged "Lulu" in a number of ways, including vocal style, orchestration, design and text. Other well-known Berg compositions include the "Lyric Suite" (1926), which was later shown to employ elaborate cyphers to document a secret love affair; the post-Mahlerian Three Pieces for Orchestra (completed in 1915 but not performed until after "Wozzeck"); and the Chamber Concerto ("Kammerkonzert", 1923–25) for violin, piano, and 13 wind instruments: this latter is written so conscientiously that Pierre Boulez has called it "Berg's strictest composition" and it, too, is permeated by cyphers and posthumously disclosed hidden programs. It was at this time that he began using tone clusters in his works, after meeting the American avant-garde composer Henry Cowell, with whom he would eventually form a lifelong friendship. Final years (1930–35). Life for the musical world was becoming increasingly difficult in the 1930s both in Vienna and Germany due to the rising tide of antisemitism and the Nazi cultural ideology that denounced modernity. 
Even to have an association with someone who was Jewish could lead to denunciation, and Berg's "crime" was to have studied with the Jewish composer Arnold Schoenberg. Berg found that opportunities for his work to be performed in Germany were becoming rare, and eventually his music was proscribed and placed on the list of degenerate music. In 1932 Berg and his wife acquired an isolated lodge, the "Waldhaus", on the southern shore of the Wörthersee, near Schiefling am See in Carinthia, where he was able to work in seclusion, mainly on "Lulu" and the Violin Concerto. At the end of 1934, Berg became involved in the political intrigues around finding a replacement for Clemens Krauss as director of the Vienna State Opera. As more of the performances of his work in Germany were cancelled by the Nazis, who had come to power in early 1933, he needed to ensure the new director would be an advocate for modernist music. Originally, the premiere of "Lulu" had been planned for the Berlin State Opera, where Erich Kleiber continued to champion his music and had conducted the premiere of "Wozzeck" in 1925, but now this was looking increasingly uncertain, and "Lulu" was rejected by the Berlin authorities in the spring of 1934. Kleiber's production of the "Lulu" symphonic suite on 30 November 1934 in Berlin was also the occasion of his resignation in protest at the extent of the conflation of culture with politics. Even in Vienna, the opportunities for the Vienna School of musicians were dwindling. Berg had interrupted the orchestration of "Lulu" because of an unexpected (and financially much-needed) commission from the Russian-American violinist Louis Krasner for a Violin Concerto (1935). This profoundly elegiac work, composed at unaccustomed speed and posthumously premiered, has become Berg's best-known and most-beloved composition. Like much of his mature work, it employs an idiosyncratic adaptation of Schoenberg's "dodecaphonic" or twelve-tone technique, which enables the composer to produce passages openly evoking tonality, including quotations from historical tonal music, such as a Bach chorale and a Carinthian folk song. The Violin Concerto was dedicated "to the memory of an Angel", Manon Gropius, the deceased daughter of architect Walter Gropius and Alma Mahler. Death. Berg died aged 50 in Vienna, on Christmas Eve 1935, from blood poisoning apparently caused by a furuncle on his back, induced by an insect sting that occurred in November. He was buried at the Hietzing Cemetery in Vienna. Before he died, Berg had completed the orchestration of only the first two of the three acts of "Lulu". The completed acts were successfully premièred in Zürich in 1937. For personal reasons Helene Berg subsequently imposed a ban on any attempt to "complete" the final act, which Berg had in fact completed in short score. An orchestration was therefore commissioned in secret from Friedrich Cerha and premièred in Paris (under Pierre Boulez) only in 1979, soon after Helene Berg's own death. Legacy. Berg is remembered as one of the most important composers of the 20th century and the most widely performed opera composer among the Second Viennese School. He is said to have brought more "human values" to the twelve-tone system, his works seen as more "emotional" than Schoenberg's. Critically, he is seen as having preserved the Viennese tradition in his music. 
Berg scholar Douglas Jarman writes in the "New Grove Dictionary of Music and Musicians" that "[as] the 20th century closed, the 'backward-looking' Berg suddenly came, as [George] Perle remarked, to look like its most forward-looking composer." The Alban Berg Foundation, founded by the composer's widow in 1969, cultivates the memory and works of the composer, and awards scholarships. The Alban Berg Monument, situated next to the Vienna State Opera and unveiled in 2016, was funded by the Foundation. The Alban Berg Quartett, a string quartet named after him, was active from 1971 until 2008. The asteroid 4528 Berg (1983) is named after him. Major compositions. Berg's major compositions span piano, chamber, orchestral and vocal works, as well as the operas.
2408
Analytical chemistry
Analytical chemistry studies and uses instruments and methods to separate, identify, and quantify matter. In practice, separation, identification or quantification may constitute the entire analysis or be combined with another method. Separation isolates analytes. Qualitative analysis identifies analytes, while quantitative analysis determines the numerical amount or concentration. Analytical chemistry consists of classical, wet chemical methods and modern, instrumental methods. Classical qualitative methods use separations such as precipitation, extraction, and distillation. Identification may be based on differences in color, odor, melting point, boiling point, solubility, radioactivity or reactivity. Classical quantitative analysis uses mass or volume changes to quantify amount. Instrumental methods may be used to separate samples using chromatography, electrophoresis or field flow fractionation. Then qualitative and quantitative analysis can be performed, often with the same instrument and may use light interaction, heat interaction, electric fields or magnetic fields. Often the same instrument can separate, identify and quantify an analyte. Analytical chemistry is also focused on improvements in experimental design, chemometrics, and the creation of new measurement tools. Analytical chemistry has broad applications to medicine, science, and engineering. History. Analytical chemistry has been important since the early days of chemistry, providing methods for determining which elements and chemicals are present in the object in question. During this period, significant contributions to analytical chemistry included the development of systematic elemental analysis by Justus von Liebig and systematized organic analysis based on the specific reactions of functional groups. The first instrumental analysis was flame emissive spectrometry developed by Robert Bunsen and Gustav Kirchhoff who discovered rubidium (Rb) and caesium (Cs) in 1860. Most of the major developments in analytical chemistry took place after 1900. During this period, instrumental analysis became progressively dominant in the field. In particular, many of the basic spectroscopic and spectrometric techniques were discovered in the early 20th century and refined in the late 20th century. The separation sciences follow a similar time line of development and also became increasingly transformed into high performance instruments. In the 1970s many of these techniques began to be used together as hybrid techniques to achieve a complete characterization of samples. Starting in the 1970s, analytical chemistry became progressively more inclusive of biological questions (bioanalytical chemistry), whereas it had previously been largely focused on inorganic or small organic molecules. Lasers have been increasingly used as probes and even to initiate and influence a wide variety of reactions. The late 20th century also saw an expansion of the application of analytical chemistry from somewhat academic chemical questions to forensic, environmental, industrial and medical questions, such as in histology. Modern analytical chemistry is dominated by instrumental analysis. Many analytical chemists focus on a single type of instrument. Academics tend to either focus on new applications and discoveries or on new methods of analysis. The discovery of a chemical present in blood that increases the risk of cancer would be a discovery that an analytical chemist might be involved in. 
An effort to develop a new method might involve the use of a tunable laser to increase the specificity and sensitivity of a spectrometric method. Many methods, once developed, are kept purposely static so that data can be compared over long periods of time. This is particularly true in industrial quality assurance (QA), forensic and environmental applications. Analytical chemistry plays an increasingly important role in the pharmaceutical industry where, aside from QA, it is used in the discovery of new drug candidates and in clinical applications where understanding the interactions between the drug and the patient is critical. Classical methods. Although modern analytical chemistry is dominated by sophisticated instrumentation, the roots of analytical chemistry and some of the principles used in modern instruments are from traditional techniques, many of which are still used today. These techniques also tend to form the backbone of most undergraduate analytical chemistry educational labs. Qualitative analysis. Qualitative analysis determines the presence or absence of a particular compound, but not the mass or concentration. By definition, qualitative analyses do not measure quantity. Chemical tests. There are numerous qualitative chemical tests, for example, the acid test for gold and the Kastle-Meyer test for the presence of blood. Flame test. Inorganic qualitative analysis generally refers to a systematic scheme to confirm the presence of certain aqueous ions or elements by performing a series of reactions that eliminate a range of possibilities and then confirm suspected ions with a confirmatory test. Sometimes small carbon-containing ions are included in such schemes. With modern instrumentation, these tests are rarely used but can be useful for educational purposes and in fieldwork or other situations where access to state-of-the-art instruments is not available or expedient. Quantitative analysis. Quantitative analysis is the measurement of the quantities of particular chemical constituents present in a substance. Quantities can be measured by mass (gravimetric analysis) or volume (volumetric analysis). Gravimetric analysis. Gravimetric analysis involves determining the amount of material present by weighing the sample before and/or after some transformation. A common example used in undergraduate education is the determination of the amount of water in a hydrate by heating the sample to remove the water, such that the difference in weight is due to the loss of water. Volumetric analysis. Titration involves the addition of a reactant to a solution being analyzed until some equivalence point is reached. From the amount of reactant required to reach the equivalence point, the amount of material in the solution being analyzed can often be determined. Most familiar to those who have taken chemistry during secondary education is the acid-base titration involving a color-changing indicator. There are many other types of titrations, for example, potentiometric titrations. These titrations may use different types of indicators to reach some equivalence point. Instrumental methods. Spectroscopy. Spectroscopy measures the interaction of molecules with electromagnetic radiation.
Spectroscopy consists of many different applications such as atomic absorption spectroscopy, atomic emission spectroscopy, ultraviolet-visible spectroscopy, X-ray spectroscopy, fluorescence spectroscopy, infrared spectroscopy, Raman spectroscopy, dual polarization interferometry, nuclear magnetic resonance spectroscopy, photoemission spectroscopy, Mössbauer spectroscopy and so on. Mass spectrometry. Mass spectrometry measures the mass-to-charge ratio of molecules using electric and magnetic fields. There are several ionization methods: electron ionization, chemical ionization, electrospray ionization, fast atom bombardment, matrix-assisted laser desorption/ionization, and others. Mass spectrometers are also categorized by the type of mass analyzer: magnetic-sector, quadrupole mass analyzer, quadrupole ion trap, time-of-flight, Fourier transform ion cyclotron resonance, and so on. Electrochemical analysis. Electroanalytical methods measure the potential (volts) and/or current (amps) in an electrochemical cell containing the analyte. These methods can be categorized according to which aspects of the cell are controlled and which are measured. The four main categories are potentiometry (the difference in electrode potentials is measured), coulometry (the transferred charge is measured over time), amperometry (the cell's current is measured over time), and voltammetry (the cell's current is measured while actively altering the cell's potential). Thermal analysis. Calorimetry and thermogravimetric analysis measure the interaction of a material and heat. Separation. Separation processes are used to decrease the complexity of material mixtures. Chromatography, electrophoresis and field flow fractionation are representative of this field. Hybrid techniques. Combinations of the above techniques produce a "hybrid" or "hyphenated" technique. Several examples are in popular use today and new hybrid techniques are under development. Examples include gas chromatography-mass spectrometry, gas chromatography-infrared spectroscopy, liquid chromatography-mass spectrometry, liquid chromatography-NMR spectroscopy, liquid chromatography-infrared spectroscopy, and capillary electrophoresis-mass spectrometry. Hyphenated separation techniques refer to a combination of two (or more) techniques to detect and separate chemicals from solutions. Most often the other technique is some form of chromatography. Hyphenated techniques are widely used in chemistry and biochemistry. A slash is sometimes used instead of a hyphen, especially if the name of one of the methods contains a hyphen itself. Microscopy. The visualization of single molecules, single cells, biological tissues, and nanomaterials is an important and attractive approach in analytical science. Hybridization with other traditional analytical tools is also revolutionizing analytical science. Microscopy can be categorized into three different fields: optical microscopy, electron microscopy, and scanning probe microscopy. This field has been progressing rapidly because of the rapid development of the computer and camera industries. Lab-on-a-chip. Lab-on-a-chip devices integrate multiple laboratory functions on a single chip only millimeters to a few square centimeters in size and are capable of handling extremely small fluid volumes, down to less than a picoliter. Errors. Error can be defined as the numerical difference between the observed value and the true value. The experimental error can be divided into two types, systematic error and random error.
Systematic error results from a flaw in equipment or the design of an experiment, while random error results from uncontrolled or uncontrollable variables in the experiment. In chemical analysis, the true value and the observed value are related by the equation

$E_\mathrm{a} = O - T$,

where $E_\mathrm{a}$ is the absolute error, $O$ is the observed value and $T$ is the true value. The error of a measurement is an inverse measure of its accuracy: the smaller the error, the greater the accuracy of the measurement. Errors can also be expressed relatively. The relative error $E_\mathrm{r}$ is

$E_\mathrm{r} = \frac{E_\mathrm{a}}{T} = \frac{O - T}{T}$,

and the percent error is $E_\mathrm{r} \times 100\%$. If we want to use these measured values in a function, we may also want to calculate the error of the function. Let $f$ be a function of $n$ variables $x_1, \ldots, x_n$. The propagation of uncertainty must then be calculated in order to know the error in $f$:

$\Delta f = \sqrt{\sum_{i=1}^{n} \left(\frac{\partial f}{\partial x_i}\right)^{2} (\Delta x_i)^{2}}$.

Standards. Standard curve. A general method for analysis of concentration involves the creation of a calibration curve. This allows for the determination of the amount of a chemical in a material by comparing the results of an unknown sample to those of a series of known standards. If the concentration of the element or compound in a sample is too high for the detection range of the technique, it can simply be diluted in a pure solvent. If the amount in the sample is below an instrument's range of measurement, the method of addition can be used. In this method, a known quantity of the element or compound under study is added, and the difference between the concentration added and the concentration observed is the amount actually in the sample. Internal standards. Sometimes an internal standard is added at a known concentration directly to an analytical sample to aid in quantitation. The amount of analyte present is then determined relative to the internal standard as a calibrant. An ideal internal standard is an isotopically enriched form of the analyte, which gives rise to the method of isotope dilution. Standard addition. The method of standard addition is used in instrumental analysis to determine the concentration of a substance (analyte) in an unknown sample by comparison to a set of samples of known concentration, similar to using a calibration curve. Standard addition can be applied to most analytical techniques and is used instead of a calibration curve to solve the matrix effect problem. Signals and noise. One of the most important components of analytical chemistry is maximizing the desired signal while minimizing the associated noise. The analytical figure of merit is known as the signal-to-noise ratio (S/N or SNR). Noise can arise from environmental factors as well as from fundamental physical processes. Thermal noise. Thermal noise results from the thermal motion of charge carriers (usually electrons) in an electrical circuit. Thermal noise is white noise, meaning that its power spectral density is constant throughout the frequency spectrum. The root mean square value of the thermal noise voltage in a resistor is given by

$v_\mathrm{RMS} = \sqrt{4 k_\mathrm{B} T R \,\Delta f}$,

where $k_\mathrm{B}$ is Boltzmann's constant, $T$ is the temperature, $R$ is the resistance, and $\Delta f$ is the bandwidth over which the noise is measured. Shot noise. Shot noise is a type of electronic noise that occurs when the finite number of particles (such as electrons in an electronic circuit or photons in an optical device) is small enough to give rise to statistical fluctuations in a signal. Shot noise is a Poisson process, and the charge carriers that make up the current follow a Poisson distribution. The root mean square current fluctuation is given by

$i_\mathrm{RMS} = \sqrt{2 e I \,\Delta f}$,

where $e$ is the elementary charge, $I$ is the average current, and $\Delta f$ is the measurement bandwidth. Shot noise is white noise.
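As a rough numerical illustration of the two noise expressions above, the following minimal Python sketch simply evaluates them for hypothetical values (a 1 kΩ resistor at 298 K and a 1 nA average current, each over a 1 kHz bandwidth); none of these numbers comes from the text.

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C

def thermal_noise_vrms(resistance_ohm: float, temperature_k: float, bandwidth_hz: float) -> float:
    """RMS Johnson-Nyquist voltage noise of a resistor: sqrt(4*kB*T*R*df)."""
    return math.sqrt(4.0 * K_B * temperature_k * resistance_ohm * bandwidth_hz)

def shot_noise_irms(average_current_a: float, bandwidth_hz: float) -> float:
    """RMS shot-noise current for an average current I: sqrt(2*e*I*df)."""
    return math.sqrt(2.0 * E_CHARGE * average_current_a * bandwidth_hz)

if __name__ == "__main__":
    bandwidth = 1.0e3  # 1 kHz measurement bandwidth (hypothetical)
    v_n = thermal_noise_vrms(resistance_ohm=1.0e3, temperature_k=298.0, bandwidth_hz=bandwidth)
    i_n = shot_noise_irms(average_current_a=1.0e-9, bandwidth_hz=bandwidth)
    print(f"Thermal noise of a 1 kOhm resistor at 298 K over 1 kHz: {v_n:.3e} V RMS")
    print(f"Shot noise of a 1 nA average current over 1 kHz:        {i_n:.3e} A RMS")
```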
Flicker noise. Flicker noise is electronic noise with a 1/"f" frequency spectrum; as "f" increases, the noise decreases. Flicker noise arises from a variety of sources, such as impurities in a conductive channel and generation-recombination noise in a transistor due to base current. This noise can be avoided by modulating the signal at a higher frequency, for example through the use of a lock-in amplifier. Environmental noise. Environmental noise arises from the surroundings of the analytical instrument. Sources of electromagnetic noise are power lines, radio and television stations, wireless devices, compact fluorescent lamps and electric motors. Many of these noise sources are narrow bandwidth and, therefore, can be avoided. Temperature and vibration isolation may be required for some instruments. Noise reduction. Noise reduction can be accomplished either in hardware or in software. Examples of hardware noise reduction are the use of shielded cable, analog filtering, and signal modulation. Examples of software noise reduction are digital filtering, ensemble averaging, boxcar averaging, and correlation methods (a short ensemble-averaging sketch appears below). Applications. Analytical chemistry has applications in forensic science, bioanalysis, clinical analysis, environmental analysis, and materials analysis. Analytical chemistry research is largely driven by performance (sensitivity, detection limit, selectivity, robustness, dynamic range, linear range, accuracy, precision, and speed) and cost (purchase, operation, training, time, and space). Among the main branches of contemporary analytical atomic spectrometry, the most widespread and universal are optical and mass spectrometry. In the direct elemental analysis of solid samples, the new leaders are laser-induced breakdown spectroscopy and laser ablation mass spectrometry, and the related techniques with transfer of the laser ablation products into an inductively coupled plasma. Advances in the design of diode lasers and optical parametric oscillators promote developments in fluorescence and ionization spectrometry and also in absorption techniques, where the use of optical cavities to increase the effective absorption pathlength is expected to expand. The use of plasma- and laser-based methods is increasing. Interest in absolute (standardless) analysis has revived, particularly in emission spectrometry. Great effort is being put into shrinking analysis techniques to chip size (micro total analysis systems (µTAS) or lab-on-a-chip). Although there are few examples of such systems that are competitive with traditional analysis techniques, potential advantages include size/portability, speed, and cost. Microscale chemistry reduces the amounts of chemicals used.
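The ensemble-averaging idea mentioned under noise reduction can be illustrated with a small, self-contained Python sketch using simulated data; the signal shape, noise level and scan counts below are hypothetical and are chosen only to show that random noise falls roughly as the square root of the number of averaged scans.

```python
import random
import statistics

def noisy_scan(true_signal, noise_sd=1.0):
    """Simulate one scan: the true signal plus zero-mean Gaussian noise at each point."""
    return [s + random.gauss(0.0, noise_sd) for s in true_signal]

def ensemble_average(scans):
    """Average repeated scans point by point; zero-mean random noise partially cancels."""
    n_points = len(scans[0])
    n_scans = len(scans)
    return [sum(scan[i] for scan in scans) / n_scans for i in range(n_points)]

if __name__ == "__main__":
    # A hypothetical "spectrum": flat baseline with one rectangular peak.
    true_signal = [0.0] * 50 + [10.0] * 10 + [0.0] * 50
    for n_scans in (1, 4, 16, 64):
        scans = [noisy_scan(true_signal) for _ in range(n_scans)]
        averaged = ensemble_average(scans)
        residual_sd = statistics.pstdev(a - t for a, t in zip(averaged, true_signal))
        print(f"{n_scans:3d} scans -> residual noise standard deviation ~ {residual_sd:.2f}")
```

Running the sketch shows the residual noise dropping by roughly a factor of two for every fourfold increase in the number of averaged scans, which is the expected square-root behaviour for uncorrelated noise.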
Many developments improve the analysis of biological systems. Examples of rapidly expanding fields in this area are genomics, DNA sequencing and related research in genetic fingerprinting and DNA microarrays; proteomics, the analysis of protein concentrations and modifications, especially in response to various stressors, at various developmental stages, or in various parts of the body; metabolomics, which deals with metabolites; transcriptomics, including mRNA and associated fields; lipidomics, which deals with lipids and associated fields; peptidomics, which deals with peptides and associated fields; and metallomics, dealing with metal concentrations and especially with their binding to proteins and other molecules. Analytical chemistry has played a critical role in areas ranging from basic science to a variety of practical applications, such as biomedical applications, environmental monitoring, quality control of industrial manufacturing, forensic science, and so on. The recent developments in computer automation and information technologies have extended analytical chemistry into a number of new biological fields. For example, automated DNA sequencing machines were the basis for completing human genome projects, leading to the birth of genomics. Protein identification and peptide sequencing by mass spectrometry opened the new field of proteomics. In addition to automating specific processes, there is an effort to automate larger sections of lab testing, as pursued by companies such as Emerald Cloud Lab and Transcriptic. Analytical chemistry has been an indispensable area in the development of nanotechnology. Surface characterization instruments, electron microscopes and scanning probe microscopes enable scientists to visualize atomic structures alongside chemical characterization.
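As a closing illustration of the calibration-curve approach described under "Standards" above, the following minimal Python sketch fits a straight line to hypothetical absorbance readings from standards of known concentration and then back-calculates the concentration of an unknown sample; all numbers are invented for the example.

```python
import statistics

def fit_calibration_line(concentrations, responses):
    """Ordinary least-squares fit of instrument response versus standard concentration."""
    mean_x = statistics.fmean(concentrations)
    mean_y = statistics.fmean(responses)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(concentrations, responses))
    sxx = sum((x - mean_x) ** 2 for x in concentrations)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

def concentration_from_response(response, slope, intercept):
    """Invert the calibration line to estimate the concentration of an unknown sample."""
    return (response - intercept) / slope

if __name__ == "__main__":
    # Hypothetical absorbance readings for standards of known concentration (mg/L).
    standard_conc = [0.0, 2.0, 4.0, 6.0, 8.0]
    absorbance = [0.002, 0.101, 0.199, 0.305, 0.398]
    slope, intercept = fit_calibration_line(standard_conc, absorbance)
    unknown_conc = concentration_from_response(0.250, slope, intercept)
    print(f"Calibration line: A = {slope:.4f} * c + {intercept:.4f}")
    print(f"Unknown sample concentration: about {unknown_conc:.2f} mg/L")
```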
2411
A cappella
A cappella music is a performance by a singer or a singing group without instrumental accompaniment, or a piece intended to be performed in this way. The term "a cappella" was originally intended to differentiate between Renaissance polyphony and Baroque concertato musical styles. In the 19th century, a renewed interest in Renaissance polyphony, coupled with an ignorance of the fact that vocal parts were often doubled by instrumentalists, led to the term coming to mean unaccompanied vocal music. The term is also used, rarely, as a synonym for "alla breve". Early history. A cappella could be as old as humanity itself. Research suggests that singing and vocables may have been what early humans used to communicate before the invention of language. The earliest piece of sheet music is thought to have originated from times as early as 2000 B.C., while the earliest that has survived in its entirety is from the first century A.D.: a piece from Greece called the Seikilos epitaph. Religious origins. A cappella music was originally used in religious music, especially church music as well as anasheed and zemirot. Gregorian chant is an example of a cappella singing, as is the majority of secular vocal music from the Renaissance. The madrigal, up until its development in the early Baroque into an instrumentally accompanied form, is also usually in a cappella form. The Psalms note that some early songs were accompanied by string instruments, though Jewish and Early Christian music was largely a cappella; the use of instruments has subsequently increased within both of these religions as well as in Islam. Christian. The polyphony of Christian a cappella music began to develop in Europe around the late 15th century AD, with compositions by Josquin des Prez. The early a cappella polyphonies may have had an accompanying instrument, although this instrument would merely double the singers' parts and was not independent. By the 16th century, a cappella polyphony had further developed, but gradually, the cantata began to take the place of a cappella forms. Sixteenth-century a cappella polyphony, nonetheless, continued to influence church composers throughout this period and to the present day. Recent evidence has shown that some of the early pieces by Palestrina, such as those written for the Sistine Chapel, were intended to be accompanied by an organ "doubling" some or all of the voices. Palestrina nonetheless became a major influence on later composers, most notably on Bach in the "Mass in B Minor". Other composers used the a cappella style, if only for the occasional piece: Claudio Monteverdi composed his masterpiece "Lagrime d'amante al sepolcro dell'amata" (A lover's tears at his beloved's grave) in this style in 1610, and upon Andrea Gabrieli's death many choral pieces were discovered, one of which was in the unaccompanied style. Learning from the preceding two composers, Heinrich Schütz used the a cappella style in numerous pieces, chief among them works in the oratorio style, which were traditionally performed during Easter week and dealt with the religious subject matter of that week, such as Christ's suffering and the Passion. Five of Schütz's "Historien" were Easter pieces, and of these the latter three, which dealt with the Passion from three different viewpoints, those of Matthew, Luke and John, were all done in a cappella style.
This was a near requirement for this type of piece: the parts of the crowd were sung, while the solo parts, the words quoted from either Christ or the authors, were performed in plainchant. Byzantine Rite. In the Byzantine Rite of the Eastern Orthodox Church and the Eastern Catholic Churches, the music performed in the liturgies is exclusively sung without instrumental accompaniment. Bishop Kallistos Ware says, "The service is sung, even though there may be no choir... In the Orthodox Church today, as in the early Church, singing is unaccompanied and instrumental music is not found." This a cappella practice arises from a strict interpretation of Psalm 150, which states, "Let every thing that hath breath praise the Lord. Praise ye the Lord." In keeping with this philosophy, early Russian "musika", which started appearing in the late 17th century in what were known as "khorovïye kontsertï" (choral concertos), made a cappella adaptations of Venetian-styled pieces in the manner described in Nikolai Diletsky's treatise "Grammatika musikiyskaya" (1675). Divine Liturgies and Western Rite masses composed by famous composers such as Peter Tchaikovsky, Sergei Rachmaninoff, Alexander Arkhangelsky, and Mykola Leontovych are fine examples of this. Opposition to instruments in worship. Present-day Christian religious bodies known for conducting their worship services without musical accompaniment include many Oriental Orthodox Churches (such as the Coptic Orthodox Church), many Anabaptist communities (including Old Order Anabaptist groups such as the Amish, Old German Baptist Brethren, and Old Order Mennonites, as well as Conservative Anabaptist groups such as the Dunkard Brethren Church and Conservative Mennonites), some Presbyterian churches devoted to the regulative principle of worship, Old Regular Baptists, Primitive Baptists, Plymouth Brethren, Churches of Christ, Church of God (Guthrie, Oklahoma), the Reformed Free Methodists, Doukhobors, and the Byzantine Rite of Eastern Christianity. Certain high church services and other musical events in liturgical churches (such as the Roman Catholic Mass and the Lutheran Divine Service) may be a cappella, a practice remaining from apostolic times. Many Mennonites also conduct some or all of their services without instruments. Sacred Harp, a type of folk music, is an a cappella style of religious singing with shape notes, usually sung at singing conventions. Opponents of musical instruments in Christian worship believe that such opposition is supported by the Christian scriptures and Church history. The scriptures typically referenced are Matthew 26:30; Acts 16:25; Romans 15:9; 1 Corinthians 14:15; Ephesians 5:19; Colossians 3:16; Hebrews 2:12, 13:15 and James 5:13, which show examples and exhortations for Christians to sing. There is no reference to instrumental music in early church worship in the New Testament, or in the worship of churches for the first six centuries. Several reasons have been posited throughout church history for the absence of instrumental music in church worship. Christians who believe in a cappella music today believe that in the Israelite worship assembly during Temple worship only the Priests of Levi sang, played, and offered animal sacrifices, whereas in the church era, all Christians are commanded to sing praises to God. They believe that if God wanted instrumental music in New Testament worship, He would have commanded not just singing, but singing and playing, as he did in the Hebrew scriptures.
Instruments have divided Christendom since their introduction into worship. They were considered a Roman Catholic innovation, not widely practiced until the 18th century, and were opposed vigorously in worship by a number of Protestant Reformers, including Martin Luther (1483–1546), Ulrich Zwingli, John Calvin (1509–1564) and John Wesley (1703–1791). Alexander Campbell referred to the use of an instrument in worship as "a cow bell in a concert". In Sir Walter Scott's "The Heart of Midlothian", the heroine, Jeanie Deans, a Scottish Presbyterian, writes to her father about the church situation she has found in England. Acceptance of instruments in worship. Those who do not adhere to the regulative principle of interpreting Christian scripture believe that limiting praise to the unaccompanied chant of the early church is not commanded in scripture, and that churches in any age are free to offer their songs with or without musical instruments. Those who subscribe to this interpretation believe that since the Christian scriptures never counter instrumental language with any negative judgment on instruments, opposition to instruments instead comes from an interpretation of history. There is no written opposition to musical instruments in any setting in the first century and a half of Christian churches (33–180 AD). The use of instruments for Christian worship during this period is also undocumented. Toward the end of the 2nd century, Christians began condemning the instruments themselves. Those who oppose instruments today believe these Church Fathers had a better understanding of God's desire for the church, but there are significant differences between the teachings of these Church Fathers and Christian opposition to instruments today. Since "a cappella" singing brought a new polyphony (more than one note at a time) with instrumental accompaniment, it is not surprising that Protestant reformers who opposed the instruments (such as Calvin and Zwingli) also opposed the polyphony. While Zwingli was destroying organs in Switzerland (Luther called him a fanatic), the Church of England was burning books of polyphony. Some Holiness Churches such as the Free Methodist Church opposed the use of musical instruments in church worship until the mid-20th century. The Free Methodist Church allowed for local church decision on the use of either an organ or piano in the 1943 Conference before lifting the ban entirely in 1955. The Reformed Free Methodist Church and Evangelical Wesleyan Church were formed as a result of a schism with the Free Methodist Church, with the former retaining a cappella worship and the latter retaining the rule limiting the number of instruments in the church to the piano and organ. Jewish. While worship in the Temple in Jerusalem included musical instruments, traditional Jewish religious services in the Synagogue, both before and after the last destruction of the Temple, did not include musical instruments, given the practice of scriptural cantillation. The use of musical instruments is traditionally forbidden on the Sabbath out of concern that players would be tempted to repair (or tune) their instruments, which is forbidden on those days. (This prohibition has been relaxed in many Reform and some Conservative congregations.)
Similarly, when Jewish families and larger groups sing traditional Sabbath songs known as zemirot outside the context of formal religious services, they usually do so a cappella, and Bar and Bat Mitzvah celebrations on the Sabbath sometimes feature entertainment by a cappella ensembles. During the Three Weeks musical instruments are prohibited. Many Jews consider a portion of the 49-day period of the counting of the omer between Passover and Shavuot to be a time of semi-mourning, and instrumental music is not allowed during that time. This has led to a tradition of a cappella singing sometimes known as "sefirah" music. The popularization of the Jewish chant may be found in the writings of the Jewish philosopher Philo, born 20 BC. Weaving together Jewish and Greek thought, Philo promoted praise without instruments, and taught that "silent singing" (without even vocal cords) was better still. This view parted with the Jewish scriptures, where Israel offered praise with instruments by God's own command. The shofar is the only temple instrument still being used today in the synagogue, and it is only used from Rosh Chodesh Elul through the end of Yom Kippur. The shofar is used by itself, without any vocal accompaniment, and is limited to a very strictly defined set of sounds and specific places in the synagogue service. However, silver trumpets, as described in Numbers 10:1-18, have been made in recent years and used in prayer services at the Western Wall. In the United States. Peter Christian Lutkin, dean of the Northwestern University School of Music, helped popularize a cappella music in the United States by founding the Northwestern A Cappella Choir in 1906. The A Cappella Choir was "the first permanent organization of its kind in America." An a cappella tradition was begun in 1911 by F. Melius Christiansen, a music faculty member at St. Olaf College in Northfield, Minnesota. The St. Olaf College Choir was established as an outgrowth of the local St. John's Lutheran Church, where Christiansen was organist and the choir was composed, at least partially, of students from the nearby St. Olaf campus. The success of the ensemble was emulated by other regional conductors, and a tradition of a cappella choral music was born in the region at colleges like Concordia College (Moorhead, Minnesota), Augustana College (Rock Island, Illinois), Waldorf University (Forest City, Iowa), Luther College (Decorah, Iowa), Gustavus Adolphus College (St. Peter, Minnesota), Augustana College (Sioux Falls, South Dakota), and Augsburg University (Minneapolis, Minnesota). The choirs typically range from 40 to 80 singers and are recognized for their efforts to perfect blend, intonation, phrasing and pitch in a large choral setting. Movements in modern a cappella over the past century include barbershop and doo wop. The Barbershop Harmony Society, Sweet Adelines International, and Harmony Inc. host educational events including Harmony University, Directors University, and the International Educational Symposium, and international contests and conventions, recognizing international champion choruses and quartets. Many a cappella groups can be found in high schools and colleges. There are amateur Barbershop Harmony Society and professional groups that sing a cappella exclusively. Although a cappella is technically defined as singing without instrumental accompaniment, some groups use their voices to emulate instruments; others are more traditional and focus on harmonizing.
A cappella styles range from gospel music to contemporary to barbershop quartets and choruses. The Contemporary A Cappella Society (CASA) is a membership option for former students, whose funds support hosted competitions and events. A cappella music was popularized between the late 2000s and the early to mid-2010s with media hits such as the 2009–2014 TV show "The Sing-Off" and the musical comedy film series "Pitch Perfect". Recording artists. In July 1943, as a result of the American Federation of Musicians boycott of US recording studios, the a cappella vocal group "The Song Spinners" had a best-seller with "Comin' In on a Wing and a Prayer". In the 1950s, several recording groups, notably The Hi-Los and the Four Freshmen, introduced complex jazz harmonies to a cappella performances. The King's Singers are credited with promoting interest in small-group a cappella performances in the 1960s. Frank Zappa loved doo wop and a cappella, so Zappa released The Persuasions' first album from his label in 1970. Judy Collins recorded "Amazing Grace" a cappella. In 1983, an a cappella group known as The Flying Pickets had a Christmas 'number one' in the UK with a cover of Yazoo's (known in the US as Yaz) "Only You". A cappella music attained renewed prominence from the late 1980s onward, spurred by the success of Top 40 recordings by artists such as The Manhattan Transfer, Bobby McFerrin, Huey Lewis and the News, All-4-One, The Nylons, Backstreet Boys, Boyz II Men, and *NSYNC. Contemporary a cappella includes many vocal groups and bands who add vocal percussion or beatboxing to create a pop/rock/gospel sound, in some cases very similar to bands with instruments. Examples of such professional groups include Straight No Chaser, Pentatonix, The House Jacks, Rockapella, Mosaic, Home Free and M-pact. There also remains a strong a cappella presence within Christian music, as some denominations purposefully do not use instruments during worship. Examples of such groups are Take 6, Glad and Acappella. Arrangements of popular music for small a cappella ensembles typically include one voice singing the lead melody, one singing a rhythmic bass line, and the remaining voices contributing chordal or polyphonic accompaniment. A cappella can also describe the isolated vocal track(s) from a multitrack recording that originally included instrumentation. These vocal tracks may be remixed or put onto vinyl records for DJs, or released to the public so that fans can remix them. One such example is the a cappella release of Jay-Z's "Black Album", which Danger Mouse mixed with The Beatles' "White Album" to create "The Grey Album". On their 1966 album titled "Album", Peter, Paul and Mary included the song "Norman Normal". All the sounds on that song, both vocals and instruments, were created by Paul's voice, with no actual instruments used. In 2013, an artist by the name Smooth McGroove rose to prominence with his style of a cappella music. He is best known for his a cappella covers of video game music tracks on YouTube. in 2015, an a cappella version of Jerusalem by multi-instrumentalist Jacob Collier was selected for Beats by Dre "The Game Starts Here" for the England Rugby World Cup campaign. Musical theatre. A cappella has been used as the sole orchestration for original works of musical theatre that have had commercial runs Off-Broadway (theatres in New York City with 99 to 500 seats) only four times. The first was Avenue X which opened on 28 January 1994 and ran for 77 performances. 
It was produced by Playwrights Horizons with book by John Jiler, music and lyrics by Ray Leslee. The musical style of the show's score was primarily Doo-Wop as the plot revolved around Doo-Wop group singers of the 1960s. In 2001, The Kinsey Sicks, produced and starred in the critically acclaimed off-Broadway hit, "DRAGAPELLA! Starring the Kinsey Sicks" at New York's legendary Studio 54. That production received a nomination for a Lucille Lortel award as Best Musical and a Drama Desk nomination for Best Lyrics. It was directed by Glenn Casale with original music and lyrics by Ben Schatz. The a cappella musical Perfect Harmony, a comedy about two high school a cappella groups vying to win the National championship, made its Off Broadway debut at Theatre Row's Acorn Theatre on 42nd Street in New York City in October 2010 after a successful out-of-town run at the Stoneham Theatre, in Stoneham, Massachusetts. Perfect Harmony features the hit music of The Jackson 5, Pat Benatar, Billy Idol, Marvin Gaye, Scandal, Tiffany, The Romantics, The Pretenders, The Temptations, The Contours, The Commodores, Tommy James & the Shondells and The Partridge Family, and has been compared to a cross between Altar Boyz and The 25th Annual Putnam County Spelling Bee. The fourth a cappella musical to appear Off-Broadway, In Transit, premiered 5 October 2010 and was produced by Primary Stages with book, music, and lyrics by Kristen Anderson-Lopez, James-Allen Ford, Russ Kaplan, and Sara Wordsworth. Set primarily in the New York City subway system its score features an eclectic mix of musical genres (including jazz, hip hop, Latin, rock, and country). In Transit incorporates vocal beat boxing into its contemporary a cappella arrangements through the use of a subway beat boxer character. Beat boxer and actor Chesney Snow performed this role for the 2010 Primary Stages production. According to the show's website, it is scheduled to reopen for an open-ended commercial run in the Fall of 2011. In 2011, the production received four Lucille Lortel Award nominations including Outstanding Musical, Outer Critics Circle and Drama League nominations, as well as five Drama Desk nominations including Outstanding Musical and won for Outstanding Ensemble Performance. In December 2016, In Transit became the first a cappella musical on Broadway. Barbershop style. Barbershop music is one of several uniquely American art forms. The earliest reports of this style of a cappella music involved African Americans. The earliest documented quartets all began in barber shops. In 1938, the first formal men's barbershop organization was formed, known as the Society for the Preservation and Encouragement of Barber Shop Quartet Singing in America (S.P.E.B.S.Q.S.A), and in 2004 rebranded itself and officially changed its public name to the Barbershop Harmony Society (BHS). Today the BHS has about 22,000 members in approximately 800 chapters across the United States and Canada, and the barbershop style has spread around the world with organizations in many other countries. The Barbershop Harmony Society provides a highly organized competition structure for a cappella quartets and choruses singing in the barbershop style. In 1945, the first formal women's barbershop organization, Sweet Adelines, was formed. In 1953, Sweet Adelines became an international organization, although it did not change its name to Sweet Adelines International until 1991. 
The membership of nearly 25,000 women, all singing in English, includes choruses in most of the fifty United States as well as in Australia, Canada, Finland, Germany, Ireland, Japan, New Zealand, Spain, Sweden, the United Kingdom, and the Netherlands. Headquartered in Tulsa, Oklahoma, the organization encompasses more than 1,200 registered quartets and 600 choruses. In 1959, a second women's barbershop organization started as a breakaway from Sweet Adelines due to ideological differences. Based on democratic principles which continue to this day, Harmony, Inc. is smaller than its counterpart, but has an atmosphere of friendship and competition. With about 2,500 members in the United States and Canada, Harmony, Inc. uses the same rules in contests that the Barbershop Harmony Society uses. Harmony, Inc. is registered in Providence, Rhode Island. Amateur and high school. The popularity of a cappella among high schools and amateurs was revived by television shows and movies such as "Glee" and "Pitch Perfect". High school groups may have conductors or student leaders who keep the tempo for the group, or beatboxers/vocal percussionists. Since 2013, summer training programs have appeared, such as A Cappella Academy in Los Angeles, California (founded by Ben Bram, Rob Dietz, and Avi Kaplan) and Camp A Cappella in Dayton, Ohio (founded by Deke Sharon and Brody McDonald). These programs teach about different aspects of a cappella music, including vocal performance, arranging, and beatboxing/vocal percussion. In other countries. Afghanistan. The Islamic Emirate of Afghanistan has no official anthem because the Taliban regard music as un-Islamic. However, the "de facto" national anthem of Afghanistan is an a cappella nasheed, as musical instruments are virtually banned as corrupting and un-Islamic. Iran. The first a cappella group in Iran is the Damour Vocal Band, which was able to perform on national television despite a ban on women singing. Pakistan. The musical show "Strepsils Stereo" is credited for introducing the art of a cappella in Pakistan. Sri Lanka. Composer Dinesh Subasinghe became the first Sri Lankan to write a cappella pieces for SATB choirs. He wrote "The Princes of the Lost Tribe" and "Ancient Queen of Somawathee" for Menaka De Sahabandu's and Bridget Helpe's choirs, respectively, based on historical incidents in ancient Sri Lanka. Voice Print is also a professional a cappella music group in Sri Lanka. Sweden. The European a cappella tradition is especially strong in the countries around the Baltic and perhaps most so in Sweden, as described by Richard Sparks in his doctoral thesis "The Swedish Choral Miracle" in 2000. Swedish a cappella choirs have over the last 25 years won around 25% of the annual prestigious European Grand Prix for Choral Singing (EGP), which despite its name is open to choirs from all over the world.
The reasons for the strong Swedish dominance are, as explained by Richard Sparks, manifold; suffice it to say here that there is a long-standing tradition, that an unusually large proportion of the population (5% is often cited) regularly sings in choirs, that the Swedish choral director Eric Ericson had an enormous impact on a cappella choral development not only in Sweden but around the world, and that there are a large number of very popular primary and secondary schools ('music schools') with high admission standards based on auditions that combine a rigid academic regimen with high-level choral singing on every school day, a system that started with Adolf Fredrik's Music School in Stockholm in 1939 but has since spread over the country. United Kingdom. A cappella has gained attention in the UK in recent years, with many groups forming at British universities by students seeking an alternative singing pursuit to traditional choral and chapel singing. This movement has been bolstered by organisations such as The Voice Festival UK. Western collegiate. It is not clear exactly where collegiate a cappella began. The Rensselyrics of Rensselaer Polytechnic Institute (formerly known as the RPI Glee Club), established in 1873, is perhaps the oldest known collegiate a cappella group. The longest continuously singing group is probably The Whiffenpoofs of Yale University, which was formed in 1909 and once included Cole Porter as a member. Collegiate a cappella groups grew throughout the 20th century. Some notable historical groups formed along the way include Colgate University's The Colgate 13 (1942), Dartmouth College's Aires (1946), Cornell University's Cayuga's Waiters (1949) and The Hangovers (1968), the University of Maine Maine Steiners (1958), the Columbia University Kingsmen (1949), the Jabberwocks of Brown University (1949), and the University of Rochester YellowJackets (1956). All-women a cappella groups followed shortly, frequently as a parody of the men's groups: the Smiffenpoofs of Smith College (1936), the Night Owls of Vassar College (1942), The Shwiffs of Connecticut College (The She-Whiffenpoofs, 1944), and The Chattertocks of Brown University (1951). A cappella groups exploded in popularity beginning in the 1990s, fueled in part by a change in style popularized by the Tufts University Beelzebubs and the Boston University Dear Abbeys. The new style used voices to emulate modern rock instruments, including vocal percussion/"beatboxing". Some larger universities now have multiple groups. Groups often join one another in on-campus concerts, such as the Georgetown Chimes' Cherry Tree Massacre, a 3-weekend a cappella festival held each February since 1975, where over a hundred collegiate groups have appeared, as well as International Quartet Champions The Boston Common and the contemporary commercial a cappella group Rockapella. Co-ed groups have produced many up-and-coming and major artists, including John Legend, an alumnus of the Counterparts at the University of Pennsylvania, Sara Bareilles, an alumna of Awaken A Cappella at the University of California, Los Angeles, and Mindy Kaling, an alumna of the Rockapellas at Dartmouth College. Mira Sorvino is an alumna of the Harvard-Radcliffe Veritones of Harvard College, where she had the solo on "Only You" by Yaz.
Jewish-interest groups such as Queens College's Tizmoret, Tufts University's Shir Appeal, University of Chicago's Rhythm and Jews, Binghamton University's Kaskeset, Ohio State University's Meshuganotes, Rutgers University's Kol Halayla, New York University's Ani V'Ata, University of California, Los Angeles's Jewkbox, and Yale University's Magevet are also gaining popularity across the U.S. Increased interest in modern a cappella (particularly collegiate a cappella) can be seen in the growth of awards such as the Contemporary A Cappella Recording Awards (overseen by the Contemporary A Cappella Society) and competitions such as the International Championship of Collegiate A Cappella for college groups and the Harmony Sweepstakes for all groups. In December 2009, a new television competition series called "The Sing-Off" aired on NBC. The show featured eight a cappella groups from the United States and Puerto Rico vying for the prize of $100,000 and a recording contract with Epic Records/Sony Music. The show was judged by Ben Folds, Shawn Stockman, and Nicole Scherzinger and was won by an all-male group from Puerto Rico called Nota. The show returned for a second, third, fourth, and fifth season, won by Committed, Pentatonix, Home Free, and The Melodores from Vanderbilt University respectively. Each year, hundreds of collegiate a cappella groups submit their strongest songs in a competition to be on The Best of College A Cappella (BOCA), an album compilation of tracks from the best college a cappella groups around the world. The album is produced by Varsity Vocals (which also produces the International Championship of Collegiate A Cappella) and Deke Sharon. According to ethnomusicologist Joshua S. Duchan, "BOCA carries considerable cachet and respect within the field despite the appearance of other compilations in part, perhaps, because of its longevity and the prestige of the individuals behind it." Collegiate a cappella groups may also submit their tracks to Voices Only, a two-disc series released at the beginning of each school year. A Voices Only album has been released every year since 2005. In addition, from 2014 to 2019, female-identifying a cappella groups had the opportunity to send their strongest song tracks to the Women's A Cappella Association (WACA) for its annual best of women's a cappella album. WACA offered another medium for women's voices to receive recognition and released an album every year from 2014 to 2019, featuring female-identifying groups from across the United States. The Women's A Cappella Association hosted seven annual festivals in California before ending operations in 2019. South Asian collegiate. South Asian a cappella features a mash-up of Western and Indian/Middle Eastern songs, which places it in the category of South Asian fusion music. A cappella is gaining popularity among South Asians with the emergence of primarily Hindi-English college groups. The first South Asian a cappella group was Penn Masala, founded in 1996 at the University of Pennsylvania. Co-ed South Asian a cappella groups are also gaining in popularity. The first co-ed South Asian a cappella group was Anokha, from the University of Maryland, formed in 2001. Also, Dil Se, another co-ed a cappella group from the University of California, Berkeley, hosts the annual "Anahat" competition. Maize Mirchi, the co-ed a cappella group from the University of Michigan, hosts "Sa Re Ga Ma Pella", an annual South Asian a cappella invitational with various groups from the Midwest.
More South Asian a cappella groups from the Midwest are Chai Town from the University of Illinois Urbana-Champaign and Dhamakapella from Case Western Reserve University. Emulating instruments. In addition to singing words, some a cappella singers also emulate instrumentation by reproducing instrumental sounds with their vocal cords and mouth, often pitched using specialised pitch pipes. One of the earliest 20th-century practitioners of this method was The Mills Brothers, whose early recordings of the 1930s clearly stated on the label that all instrumentation was done vocally. More recently, "Twilight Zone" by 2 Unlimited was sung a cappella to the instrumentation on the comedy television series "Tompkins Square". Another famous example of emulating instrumentation instead of singing the words is the theme song for "The New Addams Family" series on Fox Family Channel (now Freeform). Groups such as Vocal Sampling and Undivided emulate Latin rhythms a cappella. In the 1960s, the Swingle Singers used their voices to emulate musical instruments in Baroque and Classical music. Vocal artist Bobby McFerrin is famous for his instrumental emulation. The a cappella group Naturally Seven recreates entire songs using vocal tones for every instrument. The Swingle Singers used ad libs to sound like instruments and have been known to produce non-verbal versions of musical instruments. Beatboxing, more accurately known as vocal percussion, is a technique used in a cappella music popularized by the hip-hop community, where rap is often performed a cappella. The advent of vocal percussion added new dimensions to the a cappella genre and has become very prevalent in modern arrangements. Beatboxing is often performed by shaping the mouth, making pops and clicks as pseudo-drum sounds. A popular phrase that beatboxers use to begin their training is "boots and cats". As beatboxers progress in their training, they remove the vowels and continue on from there, emulating a "bts n cts n" sound, a solid base for beginner beatboxers. The phrase has become popular enough that Siri recites "Boots and Cats" when asked to beatbox. Jazz vocalist Petra Haden used a four-track recorder to produce an a cappella version of "The Who Sell Out", including the instruments and fake advertisements, on an album released in 2005. Haden has also released a cappella versions of Journey's "Don't Stop Believin'", The Beach Boys' "God Only Knows" and Michael Jackson's "Thriller".
2414
Arrangement
In music, an arrangement is a musical adaptation of an existing composition. Differences from the original composition may include reharmonization, melodic paraphrasing, orchestration, or formal development. Arranging differs from orchestration in that the latter process is limited to the assignment of notes to instruments for performance by an orchestra, concert band, or other musical ensemble. Arranging "involves adding compositional techniques, such as new thematic material for introductions, transitions, or modulations, and endings. Arranging is the art of giving an existing melody musical variety". In jazz, a memorized (unwritten) arrangement of a new or pre-existing composition is known as a "head arrangement". Classical music. Arrangements and transcriptions of classical and serious music go back to the early history of the genre. Eighteenth century. J.S. Bach frequently made arrangements of his own and other composers' pieces. One example is the arrangement that he made of the Prelude from his Partita No. 3 for solo violin, BWV 1006. Bach transformed this solo piece into an orchestral Sinfonia that introduces his Cantata BWV 29. "The initial violin composition was in E major but both arranged versions are transposed down to D, the better to accommodate the wind instruments". "The transformation of material conceived for a single string instrument into a fully orchestrated concerto-type movement is so successful that it is unlikely that anyone hearing the latter for the first time would suspect the existence of the former". Nineteenth and twentieth centuries. Piano music. In particular, music written for the piano has frequently undergone this treatment, as it has been arranged for orchestra, chamber ensemble or concert band. Beethoven made an arrangement of his Piano Sonata No. 9 for string quartet. Conversely, Beethoven also arranged his Grosse Fuge (a movement from one of his late string quartets) for piano duet. Due to his lack of expertise in orchestration, the American composer George Gershwin had his "Rhapsody in Blue" arranged and orchestrated by Ferde Grofé. Erik Satie wrote his three "Gymnopédies" for solo piano in 1888. Eight years later, Debussy arranged two of them, exploiting the range of instrumental timbres available in a late 19th-century orchestra. "It was Debussy whose 1896 orchestrations of the Gymnopédies put their composer on the map." "Pictures at an Exhibition", a suite of ten piano pieces by Modest Mussorgsky, has been arranged over twenty times, notably by Maurice Ravel. Ravel's arrangement demonstrates an "ability to create unexpected, memorable orchestral sonorities". In the second movement, "Gnomus", Mussorgsky's original piano piece simply repeats a short passage. Ravel initially orchestrates it for woodwinds; when the passage repeats, he provides a fresh orchestration, "this time with the celesta (replacing the woodwinds) accompanied by string glissandos on the fingerboard". Songs. A number of Franz Schubert's songs, originally for voice with piano accompaniment, were arranged by other composers. For example, Schubert's "highly charged, graphic" song "Erlkönig" (the Erl King) has a piano introduction that conveys "unflagging energy" from the start. The arrangement of this song by Hector Berlioz uses strings to convey faithfully the driving urgency and threatening atmosphere of the original. Berlioz adds colour in bars 6-8 through the addition of woodwind, horns, and timpani.
With typical flamboyance, Berlioz adds spice to the harmony in bar 6 with an E flat in the horn part, creating a half-diminished seventh chord which is not in Schubert's original piano part. There are subtle differences between this and the arrangement of the song by Franz Liszt. The upper string sound is thicker, with violins and violas playing the fierce repeated octaves in unison and bassoons compensating for this by doubling the cellos and basses. There are no timpani, but trumpets and horns add a small jolt to the rhythm of the opening bar, reinforcing the bare octaves of the strings by playing on the second main beat. Unlike Berlioz, Liszt does not alter the harmony, but changes the emphasis somewhat in bar 6, with the note A in the oboes and clarinets grating against rather than blending with the G in the strings. "Schubert has come in for his fair share of transcriptions and arrangements. Most, like Liszt's transcriptions of the Lieder or Berlioz's orchestration for "Erlkönig", tell us more about the arranger than about the original composer, but they can be diverting so long as they are in no way a replacement for the original". Gustav Mahler's "Lieder eines fahrenden Gesellen" (Songs of a Wayfarer) were originally written for voice with piano accompaniment. The composer's later arrangement of the piano part shows a typical ear for clarity and transparency in re-writing for an ensemble. In the closing bars of the second song, "Ging heut' Morgen über's Feld", the orchestration shows Mahler's attention to detail in bringing out differentiated orchestral colours supplied by woodwind, strings and horn. Mahler uses a harp to convey the original arpeggios supplied by the left hand of the piano part. Mahler also extracts a descending chromatic melodic line, implied by the left hand in bars 2-4, and gives it to the horn. Popular music. Popular music recordings often include parts for brass horn sections, bowed strings, and other instruments that were added by arrangers and not composed by the original songwriters. Some pop arrangers even add sections using full orchestra, though this is less common due to the expense. Popular music arrangements may also be considered to include new releases of existing songs with a new musical treatment. These changes can include alterations to tempo, meter, key, instrumentation, and other musical elements. Well-known examples include Joe Cocker's version of the Beatles' "With a Little Help from My Friends," Cream's "Crossroads", and Ike and Tina Turner's version of Creedence Clearwater Revival's "Proud Mary". The American group Vanilla Fudge and British group Yes based their early careers on radical re-arrangements of contemporary hits. Bonnie Pointer performed disco and Motown-themed versions of "Heaven Must Have Sent You." Remixes, such as in dance music, can also be considered arrangements. Jazz. Arrangements for small jazz combos are usually informal, minimal, and uncredited. Larger ensembles have generally had greater requirements for notated arrangements, though the early Count Basie big band is known for its many "head" arrangements, so called because they were worked out by the players themselves, memorized ("in the player's head"), and never written down. Most arrangements for big bands, however, were written down and credited to a specific arranger, as with arrangements by Sammy Nestico and Neal Hefti for Count Basie's later big bands.
Don Redman made innovations in jazz arranging as a part of Fletcher Henderson's orchestra in the 1920s. Redman's arrangements introduced a more intricate melodic presentation and "soli" performances for various sections of the big band. Benny Carter became Henderson's primary arranger in the early 1930s, becoming known for his arranging abilities in addition to his previous recognition as a performer. Beginning in 1938, Billy Strayhorn became an arranger of great renown for the Duke Ellington orchestra. Jelly Roll Morton is sometimes considered the earliest jazz arranger. While he toured around the years 1912 to 1915, he wrote down parts to enable "pickup bands" to perform his compositions. Big-band arrangements are informally called "charts". In the swing era they were usually either arrangements of popular songs or they were entirely new compositions. Duke Ellington's and Billy Strayhorn's arrangements for the Duke Ellington big band were usually new compositions, and some of Eddie Sauter's arrangements for the Benny Goodman band and Artie Shaw's arrangements for his own band were new compositions as well. It became more common to arrange sketchy jazz combo compositions for big band after the bop era. After 1950, the big bands declined in number. However, several bands continued and arrangers provided renowned arrangements. Gil Evans wrote a number of large-ensemble arrangements in the late 1950s and early 1960s intended for recording sessions only. Other arrangers of note include Vic Schoen, Pete Rugolo, Oliver Nelson, Johnny Richards, Billy May, Thad Jones, Maria Schneider, Bob Brookmeyer, Lou Marini, Nelson Riddle, Ralph Burns, Billy Byers, Gordon Jenkins, Ray Conniff, Henry Mancini, Ray Reach, Vince Mendoza, and Claus Ogerman. In the 21st century, the big-band arrangement has made a modest comeback. Gordon Goodwin, Roy Hargrove, and Christian McBride have all rolled out new big bands with both original compositions and new arrangements of standard tunes. For instrumental groups. Strings. The string section is a body of instruments composed of various bowed stringed instruments. By the 19th century orchestral music in Europe had standardized the string section into the following homogeneous instrumental groups: first violins, second violins (the same instrument as the first violins, but typically playing an accompaniment or harmony part to the first violins, and often at a lower pitch range), violas, cellos, and double basses. The string section in a multi-sectioned orchestra is sometimes referred to as the "string choir." The harp is also a stringed instrument, but is not a member of nor homogeneous with the violin family and is not considered part of the string choir. Samuel Adler classifies the harp as a plucked string instrument in the same category as the guitar (acoustic or electric), mandolin, banjo, or zither. Like the harp these instruments do not belong to the violin family and are not homogeneous with the string choir. In modern arranging these instruments are considered part of the rhythm section. The electric bass and upright string bass—depending on the circumstance—can be treated by the arranger as either string section or rhythm section instruments. A group of instruments in which each member plays a unique part—rather than playing in unison with other like instruments—is referred to as a chamber ensemble. A chamber ensemble made up entirely of strings of the violin family is referred to by its size. 
A string trio consists of three players, a string quartet four, a string quintet five, and so on. In most circumstances the string section is treated by the arranger as one homogeneous unit and its members are required to play preconceived material rather than improvise. A string section can be utilized on its own (this is referred to as a string orchestra) or in conjunction with any of the other instrumental sections. More than one string orchestra can be utilized. A standard string section (vln., vln 2., vla., vcl, cb.) with each section playing unison allows the arranger to create a five-part texture. Often an arranger will divide each violin section in half or thirds to achieve a denser texture. It is possible to carry this division to its logical extreme in which each member of the string section plays his or her own unique part. Size of the string section. Artistic, budgetary and logistical concerns, including the size of the orchestra pit or hall will determine the size and instrumentation of a string section. The Broadway musical "West Side Story", in 1957, was booked into the Winter Garden theater; composer Leonard Bernstein disliked the playing of "house" viola players he would have to use there, and so he chose to leave them out of the show's instrumentation; a benefit was the creation of more space in the pit for an expanded percussion section. George Martin, producer and arranger for The Beatles, warns arrangers about the intonation problems when only two like instruments play in unison: "After a string quartet, I do not think there is a satisfactory sound for strings until one has at least three players on each line . . . as a rule two stringed instruments together create a slight 'beat' which does not give a smooth sound." Different music directors may use different numbers of string players and different balances between the sections to create different musical effects. While any combination and number of string instruments is possible in a section, a traditional string section sound is achieved with a violin-heavy balance of instruments.
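The "beat" George Martin describes above is the familiar acoustic interference effect heard when two nearly identical pitches sound together. As an illustrative aside (not part of Martin's remarks, and using hypothetical figures), the fluctuation in loudness occurs at the difference between the two frequencies:

$f_{\text{beat}} = \lvert f_1 - f_2 \rvert$

So two violinists aiming at A440 but actually sounding 440 Hz and 441.5 Hz would produce a waver of about 1.5 Hz. With three or more players on a line, the many small discrepancies tend to blur into a smoother, chorus-like ensemble sound rather than a single audible beat, which is one way of understanding Martin's preference for at least three players per part.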
2416
Athanasian Creed
The Athanasian Creed — also called the Pseudo-Athanasian Creed or Quicunque Vult (or Quicumque Vult), which is both its Latin name and its opening words, meaning "Whosoever wishes" — is a Christian statement of belief focused on Trinitarian doctrine and Christology. Used by Christian churches since the early sixth century, it was the first creed to explicitly state the equality of the three hypostases of the Trinity. It differs from the Nicene-Constantinopolitan Creed and the Apostles' Creed in that it includes anathemas condemning those who disagree with its statements (as does the original Nicene Creed). Widely accepted in Western Christianity, including by the Roman Catholic Church, Lutheran Churches (it is part of the Lutheran confessions set out in the "Book of Concord"), Anglican Churches, Reformed Churches, and ancient liturgical churches, the Athanasian Creed has been used in public worship less frequently, with the exception of Trinity Sunday. However, part of it can be found as an "Authorized Affirmation of Faith" in the main volume of the "Common Worship" liturgy of the Church of England published in 2000. Designed to distinguish Nicene Christianity from Arianism, the Athanasian Creed traditionally was recited at the Sunday Office of Prime in the Western Church. It has not been commonly used in the Eastern Church. Origin. There is a possible allusion to the Creed in Gregory Nazianzen's Oration in praise of Athanasius: "For, when all the rest who sympathised with us were divided into three parties, and many were faltering in their conception of the Son, and still more in that of the Holy Ghost, (a point on which to be only slightly in error was to be orthodox) and few indeed were sound upon both points, he was the first and only one, or with the concurrence of but a few, to venture to confess in writing, with entire clearness and distinctness, the Unity of Godhead and Essence of the Three Persons, and thus to attain in later days, under the influence of inspiration, to the same faith in regard to the Holy Ghost, as had been bestowed at an earlier time on most of the Fathers in regard to the Son. This confession, a truly royal and magnificent gift, he presented to the Emperor, opposing to the unwritten innovation, a written account of the orthodox faith, so that an emperor might be overcome by an emperor, reason by reason, treatise by treatise." (Oration 21, p. 33) A medieval account credited Athanasius of Alexandria, the famous defender of Nicene theology, as the author of the Creed. According to that account, Athanasius composed it during his exile in Rome and presented it to Pope Julius I as a witness to his orthodoxy. The traditional attribution of the Creed to Athanasius was first called into question in 1642 by the Dutch Protestant theologian Gerhard Johann Vossius. It has since been widely accepted by modern scholars that the creed was not authored by Athanasius, that it was not originally called a creed at all and that Athanasius's name was not originally attached to it. Athanasius's name seems to have become attached to the creed as a sign of its strong declaration of Trinitarian faith. The reasoning for rejecting Athanasius as the author rests on a combination of internal and historical evidence. The use of the creed in a sermon by Caesarius of Arles, as well as a theological resemblance to works by Vincent of Lérins, points to Southern Gaul as its origin. The most likely time frame is in the late fifth or early sixth century AD, at least 100 years after Athanasius lived. 
The Christian theology of the creed is firmly rooted in the Augustinian tradition and uses the exact terminology of Augustine's "On the Trinity" (published 415 AD). In the late 19th century, there was a great deal of speculation about who might have authored the creed, with suggestions including Ambrose of Milan, Venantius Fortunatus and Hilary of Poitiers. The 1940 discovery of a lost work by Vincent of Lérins, which bears a striking similarity to much of the language of the Athanasian Creed, has led many to conclude that the creed originated with Vincent or his students. For example, in the authoritative modern monograph about the creed, J. N. D. Kelly asserts that Vincent of Lérins was not its author but that it may have come from the same milieu, the area of Lérins in southern Gaul. The oldest surviving manuscripts of the Athanasian Creed date from the late 8th century. Content. The Athanasian Creed is usually divided into two sections: lines 1–28 address the doctrine of the Trinity, and lines 29–44 address the doctrine of Christology. Enumerating the three persons of the Trinity (Father, the Son, and the Holy Spirit), the first section of the creed ascribes the divine attributes to each individually. Thus, each person of the Trinity is described as uncreated ("increatus"), limitless ("Immensus"), eternal ("æternus"), and omnipotent ("omnipotens"). While ascribing the divine attributes and divinity to each person of the Trinity, thus avoiding subordinationism, the first half of the Athanasian Creed also stresses the unity of the three persons in the one Godhead, thus avoiding a theology of tritheism. The text of the Athanasian Creed is as follows: The Christology of the second section is more detailed than that of the Nicene Creed and reflects the teaching of the First Council of Ephesus (431) and the definition of the Council of Chalcedon (451). The Athanasian Creed uses the term "substantia" (a Latin translation of the Nicene "homoousios": 'same being' or 'consubstantial') with respect to the relation of the Son to the Father according to his divine nature, but it also says that the Son is "substantia" of his mother Mary according to his human nature. The Creed's wording thus excludes not only Sabellianism and Arianism but also the Christological heresies of Nestorianism and Eutychianism. A need for a clear confession against Arianism arose in Western Europe when the Ostrogoths and Visigoths, who had Arian beliefs, invaded at the beginning of the 5th century. The final section of this Creed also moved beyond the Nicene (and Apostles') Creeds in making negative statements about the people's fate: "They that have done good shall go into life everlasting: and they that have done evil into everlasting fire." That caused considerable debate in England in the mid-19th century, centred on the teaching of Frederick Denison Maurice. Uses. Composed of 44 rhythmic lines, the Athanasian Creed appears to have been intended as a liturgical document, the original purpose of the creed being for it to be spoken or sung as a part of worship. The creed itself uses the language of public worship by speaking of the worship of God rather than the language of belief ("Now this is the catholic faith: We worship one God"). In the mediaeval Catholic Church, the creed was recited following the Sunday sermon or at the Sunday Office of Prime. The creed was often set to music and used in the place of a Psalm. Protestantism. 
Early Protestants inherited the late medieval devotion to the Athanasian Creed, and it was considered to be authoritative in many Protestant churches. The statements of Protestant belief (confessional documents) of various Reformers commend the Athanasian Creed to their followers, including the Augsburg Confession, the Formula of Concord, the Second Helvetic Confession, the Belgic Confession, the Bohemian Confession and the Thirty-nine Articles. A metric version, "Quicumque vult", with a musical setting, was published in "The Whole Booke of Psalmes" printed by John Day in 1562. Among modern Lutheran and Reformed churches adherence to the Athanasian Creed is prescribed by the earlier confessional documents, but the creed does not receive much attention outside occasional use, especially on Trinity Sunday. In Reformed circles, it is included, for example, in the Christian Reformed Churches of Australia's Book of Forms (published in 1991). It is sometimes recited in liturgies of the Canadian Reformed Churches and in the Protestant Reformed Churches. The four ancient creeds to which these churches adhere are the Apostles' Creed, the Athanasian Creed, the Creed of Chalcedon, and the Nicene Creed. In the successive Books of Common Prayer of the reformed Church of England, from 1549 to 1662, its recitation was provided for on 19 occasions each year, a practice that continued until the 19th century, when vigorous controversy regarding its statement about 'eternal damnation' saw its use gradually decline. It remains one of the three Creeds approved in the Thirty-Nine Articles, and it is printed in several current Anglican prayer books, such as "A Prayer Book for Australia" (1995). As with Roman Catholic practice, its use is now generally only on Trinity Sunday or its octave. An Anglican devotional manual published by The Church Union, "A Manual of Catholic Devotion: For Members of the Church of England", includes the Athanasian Creed with the prayers for Mattins, with the note: "Said on certain feasts at Mattins instead of the Apostles' Creed". The Episcopal Church, based in the United States, has never provided for its use in worship, but added it to its Book of Common Prayer for the first time in 1979, where it is included in small print in a reference section, "Historical Documents of the Church". The Anglo-Catholic devotional manual Saint Augustine's Prayer Book, first published in 1947 and revised in 1967, includes the Athanasian Creed under "Devotions to the Holy Trinity". Lutheranism. In Lutheranism, the Athanasian Creed is, along with the Apostles' and the Nicene Creed, one of the three ecumenical creeds and is placed at the beginning of the 1580 Book of Concord, the historic collection of authoritative doctrinal statements (confessions) of the Lutheran Church. It is still used in the liturgy on Trinity Sunday. Catholicism. In Roman Catholic churches, it was traditionally said at Prime on Sundays when the Office was of the Sunday. The 1911 reforms reduced that to Sundays after Epiphany and Pentecost and on Trinity Sunday, except when a commemoration of a double feast or a day within an Octave occurred. The 1960 reforms further reduced its use to once a year, on Trinity Sunday. It has been effectively dropped from the Catholic liturgy since the Second Vatican Council. It is, however, maintained in the rite of exorcism of the Roman Rite. Opus Dei members recite it on the third Sunday of every month. A common visualization of the first half of the Creed is the Shield of the Trinity.
2417
Alicante
Alicante is a city and municipality in the Valencian Community, Spain. It is the capital of the province of Alicante and a historic Mediterranean port. The population of the city was 337,482, the second-largest in the Valencian Community. Toponymy. The name of the city echoes the Arabic name "Laqant" or "al-Laqant", which in turn reflects the Latin "Lucentum" and Greek root "Leuké" (or "Leuka"), meaning "white". History. The area around Alicante has been inhabited for over 7000 years. The first tribes of hunter-gatherers moved down gradually from Central Europe between 5000 and 3000 BC. Some of the earliest settlements were made on the slopes of Mount Benacantil. By 1000 BC Greek and Phoenician traders had begun to visit the eastern coast of Spain, establishing small trading ports and introducing the native Iberian tribes to the alphabet, iron, and the pottery wheel. The Carthaginian general Hamilcar Barca established the fortified settlement of "Akra Leuké" (Greek, meaning "White Mountain" or "White Point"), in the mid-230s BC, which is generally presumed to have been on the site of modern Alicante. Although the Carthaginians conquered much of the land around Alicante, the Romans would eventually rule Hispania Tarraconensis for over 700 years. By the 5th century AD, Rome was in decline and the Roman predecessor town of Alicante, known as "Lucentum" (Latin), was more or less under the control of the Visigothic warlord Theudimer and thereafter under Visigothic rule from 400 to 700 A.D. The Goths did not put up much resistance to the Arab conquest of "Medina Laqant" at the beginning of the 8th century. The Moors ruled southern and eastern Spain until the 13th century "Reconquista" (Reconquest). Alicante was conquered again in 1247 by the Castilian king Alfonso X, but later passed to the Kingdom of Valencia in 1296 under King James II of Aragon. It gained the status of Royal Village ("Vila Reial") with representation in the medieval Valencian Parliament ("Corts Valencianes"). After several decades of being the battlefield where the Crown of Castile and the Crown of Aragon clashed, Alicante became a major Mediterranean trading station exporting rice, wine, olive oil, oranges, and wool. But between 1609 and 1614 King Felipe III expelled thousands of Moriscos who had remained in Valencia after the Reconquista, due to their cooperation with Barbary pirates who continually attacked coastal cities and caused much harm to trade. This act cost the region dearly; with so many skilled artisans and agricultural labourers gone, the feudal nobility found itself sliding into bankruptcy. Conditions worsened in the early 18th century; after the War of Spanish Succession, Alicante went into a long, slow decline, surviving through the 18th and 19th centuries by making shoes and growing agricultural produce such as oranges and almonds, and thanks to its fisheries. The end of the 19th century witnessed a sharp recovery of the local economy with increasing international trade and the growth of the city harbour leading to increased exports of several products (particularly during World War I when Spain was a neutral country). During the early 20th century, Alicante was a minor capital that profited from Spain's neutrality during World War I, which provided new opportunities for local industry and agriculture. 
The Rif War in the 1920s saw numerous "alicantinos" drafted to fight in the long and bloody campaigns in the former Spanish protectorate (northern Morocco) against the Rif rebels. The political unrest of the late 1920s led to the victory of Republican candidates in local council elections throughout the country, and the abdication of King Alfonso XIII. The proclamation of the Second Spanish Republic was much celebrated in the city on 14 April 1931. The Spanish Civil War broke out on 17 July 1936. Alicante was the last city loyal to the Republican government to be occupied by General Franco's troops on 1 April 1939, and its harbour saw the last Republican government officials fleeing the country. Alicante was the target of vicious air bombings during the three years of civil conflict, most notably the bombing by the Italian "Aviazione Legionaria" of the Mercado on 25 May 1938 in which more than 300 civilians perished. The port of Alicante was the site of a heroic episode at the end of the Spanish Civil War: on the night of 28 March 1939, the British ship "SS Stanbrook", under its captain Archibald Dickson, rescued thousands of Spanish Republican families while the port was under aerial bombardment. From 1954 onwards many "pied-noirs" settled in the city (as many as 30,000, although other sources put the figure at around a tenth of that). Alicante had fostered strong links with Oran in the past, and a notable share of the population of the latter city during the French colonial period had ancestry in the province of Alicante. The immigration process accelerated after the independence of Algeria in 1962. The late 1950s and early 1960s saw the onset of a lasting transformation of the city by the tourist industry. Large buildings and complexes rose in nearby Albufereta, e.g. El Barco, and Playa de San Juan de Alicante, with the benign climate being the biggest draw to attract prospective buyers and tourists who kept the hotels reasonably busy. New construction benefited the whole economy, as the development of the tourism sector also spawned new businesses such as restaurants, bars, and other tourist-oriented enterprises. Also, the old airfield at Rabasa was closed and air traffic moved to the new El Altet Airport, which provided a more convenient and modern facility for charter flights bringing tourists from northern European countries. When Franco died in 1975, his successor Juan Carlos I played his part as the living symbol of the transition of Spain to a democratic constitutional monarchy. The governments of regional communities were given constitutional status as "nationalities", and their governments were given more autonomy, including that of the Valencian region, the "Generalitat Valenciana". The Port of Alicante has been reinventing itself since the industrial decline the city suffered in the 1980s (with most mercantile traffic lost to Valencia's harbour). In recent years, the Port Authority has established it as one of the most important ports in Spain for cruises, with 72 calls to port made by cruise ships in 2007 bringing some 80,000 passengers and 30,000 crew to the city each year. The moves to develop the port for more tourism have been welcomed by the city and its residents, but the latest plans to develop an industrial estate in the port have caused great controversy. Geography. Alicante is located in the southeast of the Iberian Peninsula, on the shores of the Mediterranean Sea. 
Some orographic features rise above the largely flat terrain on which the city is built, including the Cabo de la Huerta, the Serra Grossa, the Tosal and the Benacantil hills. Located in an arid territory, Alicante lacks any meaningful permanent water stream. There are, however, several stream beds corresponding to intermittent "ramblas". There was a swamp in the northeast of the municipality, "l'Albufereta", but it was drained in 1928. The municipality has two exclaves on the mainland: Monnegre (between the municipalities of San Vicente del Raspeig, Mutxamel, Busot and Jijona), and Cabeçó d'Or; the latter comprises part of the namesake Cabeçó d'Or mountain (including the summit, 1209 metres above sea level). The small island of Tabarca, 8 nautical miles to the south of the city, also belongs to the municipality. The foot of the main staircase of the City Hall Building ("Ayuntamiento") is the zero point ("cota cero"), used as the point of reference for measuring the height above or below sea level of any point in Spain, due to the marginal tidal variations of the Mediterranean sea at Alicante. Economy. Until the global recession which started in 2008, Alicante was one of the fastest-growing cities in Spain. The boom depended partly on tourism directed to the beaches of the Costa Blanca and particularly on the second residence-construction boom which started in the 1960s and revived by the late 1990s. Services and public administration also play a major role in the city's economy. The construction boom has raised many environmental concerns and both the local autonomous government and city council are under scrutiny by the European Union. The construction surge was the subject of heated debate among politicians and citizens alike. The latest of many public battles concerns the plans of the Port Authority of Alicante to construct an industrial estate on reclaimed land in front of the city's coastal strip, in breach of local, national, and European regulations. (See Port of Alicante for details). The city serves as the headquarters of the European Union Intellectual Property Office and a sizeable population of European public workers lives there. The campus of the University of Alicante lies in San Vicente del Raspeig, bordering the city of Alicante to the north. More than 25,000 students attend the university. Between 2005 and 2012 Ciudad de la Luz ("Ciutat de la Llum"), one of the largest film studios in Europe, had its base in Alicante. The studio shot Spanish and international movies such as "Asterix at the Olympic Games" by Frédéric Forestier and Thomas Langmann, and "Manolete" by Menno Meyjes. It was shut down in 2012 for violating European competition law. Government and administration. Luis Barcala of the People's Party has been the mayor of Alicante since 19 April 2018. He became mayor after the resignation of Gabriel Echávarri, when the councillor Nerea Belmonte defected from Guanyar Alacant and refused to support the Socialist Party replacement candidate Eva Montesinos. Gabriel Echávarri of the Socialist Party (PSOE) was the mayor of the city from 13 June 2015 until April 2018, following the municipal elections on 24 May 2015. He was supported by the votes from his group (6), plus those from leftist parties Guanyar Alacant (6) and Compromís (3), as well as from the centre-right party Ciudadanos (6). The People's Party ("Partido Popular", PP), with only 8 elected seats, lost the majority. 
In April 2018 he resigned due to various judicial issues and was temporarily replaced by the councillor Eva Montesinos. In the previous municipal elections of May 2011, Sonia Castedo of the People's Party won the elections with an absolute majority, but resigned in December 2014 due to her involvement in several corruption scandals, which remain under investigation. Her fellow party member Miguel Valor served as mayor until Echávarri's election. Climate. Alicante has mild winter temperatures, hot summers, and little rain, concentrated in equinoctial periods. Like the rest of the Province of Alicante itself, which has a range of dry climate types, the city has a hot semi-arid climate ("BSh") according to the Köppen climate classification. Daily variations in temperature are generally small because of the stabilising influence of the sea, although occasional periods of westerly wind can produce much larger temperature changes. Seasonal temperature variations are also relatively small, meaning that winters are mild and summers are hot. Average annual rainfall is low, and the cold drop means that September and October are the wettest months. Rarely, the rainfall can be torrential, leading to severe flooding. Because of this irregularity, only 35 rainy days are observed on average per year, and the annual number of sunshine hours is 2,851. The record maximum temperature was observed on 13 August 2022, and the record minimum on 12 February 1956. The worst flooding in the city's modern history occurred on 30 September 1997, when torrential rain fell within six hours. Very low temperatures are rare; the last recorded snowfall occurred in 1926. Alicante enjoys one of the sunniest and warmest winter daytime temperatures in mainland Europe. Demographics. The official population of Alicante in 2022 was 338,577 inhabitants and 768,194 in the metropolitan area "Alicante-Elche". As of 2022, about 17.7% of the population is foreign, 62,195 people, most of them immigrants who have arrived in the previous 20 years. In addition, an estimated several thousand people from countries outside the EU (mostly from Africa) live in the city without legal residence status and are therefore not counted in official population figures. The real percentage of foreign residents is higher, since the Alicante metropolitan area is home to many Northern European retirees who are officially still residents of their own countries. A sizable number of semi-permanent residents are Spanish nationals who officially still live in other areas of Spain. Transportation. Alicante Airport outranks Valencia Airport as the busiest airport in the Valencian Community, and is among the busiest airports in Spain after Madrid, Barcelona, Palma de Mallorca and Málaga. It is connected with Madrid and Barcelona by frequent Iberia and Vueling flights, and with many Western European cities through carriers such as Ryanair, Easyjet and Jet2.com. There are also regular flights to Algeria. Alicante railway station is used by Cercanías Murcia/Alicante commuter rail services linking Alicante with suburbs and Murcia. Long-range Renfe trains run frequently to Madrid, Barcelona, and Valencia. In 2013, the Madrid–Levante high-speed rail network was extended to Alicante station, allowing AVE high-speed rail services to link to Madrid via Villena AV, Albacete-Los Llanos and Cuenca-Fernando Zóbel. 
Alicante Tram connects the city with outlying settlements along the Costa Blanca. Electric tram-trains run as far as Benidorm, and diesel trains continue on to Dénia. The city has regular ferry services to the Balearic Islands and Algeria. The city is strongly fortified, with a spacious harbour. Main sights. Amongst the most notable features of the city are the Castle of Santa Bárbara and the port of Alicante. The latter was the subject of bitter controversy in 2006–2007 as residents battled, successfully, to keep it from being changed into an industrial estate. The Santa Bárbara castle is situated on Mount Benacantil, overlooking the city. The tower ("La Torreta") at the top is the oldest part of the castle, while part of the lowest zone and the walls were constructed later in the 18th century. The promenade "Explanada de España", lined by palm trees, is paved with 6.5 million marble floor tiles creating a wavy form, and is one of the loveliest promenades in Spain. The Promenade extends from the Port of Alicante to the Gran Vía and ends at the famous statue of Mark Hersch. For the people of Alicante, the promenade is the meeting place for the traditional Spanish "paseo", or stroll along the waterfront in the evenings, and a venue for outdoor musical concerts. At the end of the promenade is a monument by the 19th-century artist Bañuls. "Barrio de la Santa Cruz" is a colourful quarter of the old city, situated southwest of Santa Bárbara castle. Its small houses climb up the hill leading to the walls and the castle, through narrow streets decorated with flags and tubs of flowers. "L'Ereta Park" is situated on the foothills of Mount Benacantil. It runs from the Santa Bárbara castle down to the old part of Alicante and consists of several levels, routes, decks, and rest stops which offer a panoramic view overlooking the city. "El Palmeral Park" is one of the favourite parks of Alicante's citizens. It includes walking trails, children's playgrounds, ponds and brooks, picnic tables, and an auditorium for concerts. Just a few kilometres from Alicante on the Mediterranean Sea lies Tabarca island. What was once a haven for Barbary pirates is now a beautiful tourist attraction. Other sights include: There are a dozen museums in Alicante. On exhibition at the Archaeological Museum of Alicante (MARQ) are local artifacts dating from 100,000 years ago until the early 20th century. The collection is divided into different rooms representing three divisions of archaeological methodology: ground, urban and underwater archaeology, with dioramas, audiovisual and interactive zones. The archaeological museum won the European Museum of the Year Award in 2004. Gravina Museum of Fine Arts presents several paintings and sculptures from the 16th century to the 19th century. Asegurada Museum of Contemporary Art houses a major collection of twentieth-century art, composed mainly of works donated by Eusebio Sempere. Festivals. The most important festival, the "Bonfires of Saint John" ("Hogueras de San Juan" / "Fogueres de Sant Joan"), takes place during the summer solstice. This is followed a week later by five nights of firework and pyrotechnic contests between companies on the urban beach "Playa del Postiguet". Another well-known festival is "Moors and Christians" ("Moros y Cristianos") in Altozano or "San Blas" district. Overall, the city boasts a year-round nightlife for the enjoyment of tourists, residents, and a large student population of the University of Alicante. 
The nightlife social scene tends to shift to nearby Playa de San Juan during the summer months. Every summer in Alicante, a two-month-long programme of music, theatre and dance is staged in the Paseo del Puerto. Sport. Alicante has two football clubs; Hércules CF and CF Intercity. Hércules currently compete in the Second Division B - Group 3, and are well known as they played in La Liga (the Spanish Premier Division) during the 1996/1997 season and again in 2010/2011. They have had many famous players such as David Trezeguet, Royston Drenthe and Nelson Valdez. Hércules are also known for their victory over Barcelona in 1997 which led to Real Madrid winning the league. Home games are played at the 30,000-capacity José Rico Pérez Stadium. The city's other club, Alicante CF, who played in the Third Division, was dissolved in 2014 due to economic problems. They were replaced in 2017 by newly formed club CF Intercity, who currently compete in the Primera Federación - Group 2 and play at the Estadio Antonio Solana. Basketball club (HLA Alicante) Lucentum Alicante participates in the Spanish basketball league. It plays in the Centro de Tecnificación de Alicante. Alicante serves as headquarters and the starting point of the Volvo Ocean Race, a yacht race around the world. The latest race sailed in October 2017. Twin towns – sister cities. Alicante is twinned with:
2421
Albrecht Achilles
Albrecht Achilles may refer to:
2422
Ann Widdecombe
Ann Noreen Widdecombe (born 4 October 1947) is a British politician and television personality. She was Member of Parliament (MP) for Maidstone and The Weald, and the former Maidstone constituency, from 1987 to 2010 and Member of the European Parliament (MEP) for South West England from 2019 to 2020. Originally a member of the Conservative Party, she was a member of the Brexit Party from 2019 until it was renamed Reform UK in 2021; she rejoined Reform UK in 2023. Born in Bath, Somerset, Widdecombe read Latin at the University of Birmingham and later studied philosophy, politics and economics at Lady Margaret Hall, Oxford. She is a religious convert from Anglicanism to Roman Catholicism, and was a member of the Conservative Christian Fellowship. She served as Minister of State for Employment from 1994 to 1995 and Minister of State for Prisons from 1995 to 1997. She later served in the Shadow Cabinet of William Hague as Shadow Secretary of State for Health from 1998 to 1999 and Shadow Home Secretary from 1999 to 2001. She was appointed to the Privy Council in 1997. Widdecombe stood down from the House of Commons at the 2010 general election. Since 2002, she has made numerous television and radio appearances, including as a television presenter. A prominent Eurosceptic, in 2016 she supported the Vote Leave campaign to withdraw the United Kingdom from the European Union (EU). Widdecombe returned to politics as the lead candidate for the Brexit Party in South West England at the 2019 European Parliament election, winning the seat in line with results nationally, serving until the country left the EU on 31 January 2020. In the general election of December 2019 – as with all other candidates for the Commons fielded by the Brexit Party – she did not win the seat she contested (Plymouth Sutton and Devonport), but retained her deposit and came third. Ideologically, Widdecombe identifies herself as a social conservative and stresses the importance of traditional values and conservatism. As a member of the House of Commons, she opposed the legality of abortion, opposed granting LGBT people legal rights such as the same age of consent as heterosexuals and the repeal of Section 28, and supported the retention of blasphemy laws. She supported reintroduction of the death penalty for murder, though more narrowly applied than previously. She has a history of supporting rigorous laws on animal protection and opposition to fox hunting. Early life. Born in Bath, Somerset, Widdecombe is the daughter of Rita Noreen ("née" Plummer; 1911–2007) and Ministry of Defence civil servant James Murray Widdecombe. Widdecombe's maternal grandfather, James Henry Plummer, was born to a Catholic family of English descent in Crosshaven, County Cork, Ireland in 1874. She attended the Royal Naval School in Singapore, and La Sainte Union Convent School in Bath. She then read Latin at the University of Birmingham and later attended Lady Margaret Hall, Oxford, to read philosophy, politics and economics. In 1971, she was the secretary of the Oxford Union for one term, and became its treasurer for one term in 1972. While studying at Oxford, she lived next door to Mary Archer, Edwina Currie, and Gyles Brandreth's wife Michèle Brown. She worked for Unilever (1973–75) and then as an administrator at the University of London (1975–87) before entering Parliament. Political career. In 1974, Widdecombe was personal assistant to Michael Ancram in the February and October general elections of that year. 
From 1976 to 1978, Widdecombe was a councillor on Runnymede District Council in Surrey. She contested the seat of Burnley in Lancashire in the 1979 general election and then, against David Owen, the Plymouth Devonport seat in the 1983 general election. In 1983 she, with Lady Olga Maitland and Virginia Bottomley, co-founded Women and Families for Defence, a group founded in opposition to the anti-nuclear Greenham Common Women's Peace Camp. Widdecombe was first elected to the House of Commons, for the Conservatives, in the 1987 general election as member for the constituency of Maidstone (which became Maidstone and The Weald in 1997). In government. Widdecombe joined Prime Minister John Major's government as Parliamentary Under-Secretary of State for Social Security in 1990. In 1993, she was moved to the Department of Employment, and she was promoted to Minister of State the following year. In 1995, she joined the Home Office as Minister of State for Prisons and visited every prison in the UK. In 1996, Widdecombe, as prisons minister, defended the Government's policy to shackle pregnant prisoners with handcuffs and chains when in hospital receiving prenatal care. Widdecombe told the Commons that the restrictions were needed to prevent prisoners from escaping the hospital. "Some MPs may like to think that a pregnant woman would not or could not escape. Unfortunately this is not true. The fact is that hospitals are not secure places in which to keep prisoners, and since 1990, 20 women have escaped from hospitals". Jack Straw, Labour's Home Affairs spokesman at the time, said it was "degrading and unnecessary" for a woman to be shackled at any stage. Shadow Cabinet. In May 1997, in the context of an inquiry into a series of prison escapes, Widdecombe remarked of former Home Secretary Michael Howard, under whom she had served, that there is "something of the night" about him. This much-quoted comment is thought to have contributed to the failure of Howard's 1997 campaign for the Conservative Party leadership, a sentiment shared by both Howard himself and Widdecombe. It led to him being caricatured as a vampire, in part due to his Romanian ancestry. Howard became the official party leader in 2003, and Widdecombe then stated, "I explained fully what my objections were in 1997 and I do not retract anything I said then. But ... we have to look to the future and not the past." After the Conservative landslide defeat at the 1997 general election, she served as Shadow Health Secretary between 1998–1999 and later as Shadow Home Secretary from 1999 to 2001 under the leadership of William Hague. Leadership contest and backbenches. During the 2001 Conservative leadership election, she could not find sufficient support amongst Conservative MPs for her leadership candidacy. She first supported Michael Ancram, who was eliminated in the first round, and then Kenneth Clarke, who lost in the final round. She afterwards declined to serve in Iain Duncan Smith's Shadow Cabinet (although she indicated on the television programme "When Louis Met...", prior to the leadership contest, that she wished to retire to the backbenches anyway). In 2001, when Michael Portillo was running for leader of the Conservative Party, Widdecombe described him and his allies as "backbiters" due to his alleged destabilising influence under Hague. She went on to say that, should he be appointed leader, she would never give him her allegiance. This was amidst a homophobic campaign led by socially conservative critics of Portillo. 
In the 2005 leadership election, she initially supported Kenneth Clarke again. Once he was eliminated, she turned support towards Liam Fox. Following Fox's subsequent elimination, she took time to reflect before finally declaring for David Davis. She expressed reservations over the eventual winner David Cameron, feeling that he did not, like the other candidates, have a proven track record, and she was later a leading figure in parliamentary opposition to his A-List policy. At the October 2006 Conservative Conference, she was Chief Dragon in a political version of the television programme "Dragons' Den", in which A-list candidates were invited to put forward a policy proposal, which was then torn apart by her team of Rachel Elnaugh, Oliver Letwin and Michael Brown. In an interview with "Metro" in September 2006 she stated that if Parliament were of a normal length, it was likely she would retire at the next general election. She confirmed her intention to stand down to "The Observer"'s Pendennis diary in September 2007, and again in October 2007 after Prime Minister Gordon Brown quashed speculation of an autumn 2007 general election. In November 2006, she moved into the house of an Islington Labour Councillor to experience life on a council estate, her response to her experience being "Five years ago I made a speech in the House of Commons about the forgotten decents. I have spent the last week on estates in the Islington area finding out that they are still forgotten." In 2007 Widdecombe was one of the 98 MPs who voted to keep their expense details secret. When the expenses claims were leaked, however, Widdecombe was described by "The Daily Telegraph" as one of the "saints" amongst all MPs. In May 2009, following the resignation of Michael Martin as Speaker of the House of Commons, it was reported that Widdecombe was gathering support for election as interim Speaker until the next general election. On 11 June 2009, she confirmed her bid to be the Speaker, but came last in the second ballot and was eliminated. Widdecombe retired from politics at the 2010 general election. It was rumoured that she would be a Conservative candidate for Police and Crime Commissioner in 2012, but she refused. She since spoke about her opposition to the Coalition Government and her surprise at not being given a peerage by David Cameron. In 2016, she supported Brexit during the 2016 EU referendum and, following the resignation of David Cameron, endorsed Andrea Leadsom in her candidacy for election for the leadership of the governing Conservative Party. Return to politics – Brexit Party. In 2019 she returned to politics as a candidate for the Brexit Party in the European parliament elections in South West England, which were held on 23 May, though she maintained that she would still vote for the Conservatives in the local elections that took place three weeks before. She was expelled by the Conservative Party immediately after her announcement. Widdecombe had considered joining the Brexit Party in March 2019, but joined later, in May. Widdecombe said that her decision to stand resulted from the Government's failure to deliver Britain's departure from the EU on schedule. "Both major parties need a seismic shock," she said, "to see the extent of public disgust." She subsequently won her seat. Widdecombe became a member of the European Parliament Committee on Civil Liberties, Justice and Home Affairs (LIBE). 
Widdecombe stood as a candidate for Plymouth Sutton and Devonport in the 2019 UK general election, coming a distant third but just retaining her deposit with 5.5% of the vote. Nigel Farage said that she was told by the Conservative Party that she would be part of their Brexit negotiations if she stood down as a candidate. Political views. Social issues. As an MP, Widdecombe expressed socially conservative views, including opposition to abortion; it was understood during her time in frontline politics that she would not become Health Secretary as long as this involved responsibility for abortions. Although a committed Christian, she characterised the issue as one of life and death on which her view had been the same even when she was agnostic; she was a member of the Society for the Protection of Unborn Children while studying at Oxford. During her time in Parliament, Widdecombe was a member of the Pro-Life All Party Parliamentary Group, which met with SPUC over concerns the organisation's more strident approach to abortion policy could alienate Protestant and atheist supporters. She converted from the Church of England (CoE) to the Roman Catholic Church following the CoE's decision to ordain women as priests. Criminal justice. In her speech at the 2000 Conservative conference, she called for a zero-tolerance policy of prosecution, with £100 fines for users of cannabis. This was well received by rank-and-file Conservative delegates. Over the years, Widdecombe has expressed her support for a reintroduction of the death penalty, which was abolished in the UK in 1965. She notably spoke of her support for its reintroduction for the worst cases of murder in the aftermath of the murder of two 10-year-old girls from Soham, Cambridgeshire, in August 2002, arguing that in the five years up to 1970, following the suspension of the death penalty, the national murder rate had more than doubled. Environmental and science issues. She is a committed animal lover and one of several Conservative MPs to have consistently voted for the ban on the hunting of foxes. Widdecombe was among more than 20 high-profile people who signed a letter to Members of Parliament in 2015 to oppose David Cameron's plan to amend the Hunting Act 2004. In 2007, she wrote that she did not want to belittle the issue of climate change, but was sceptical of the claims that specific actions would prevent catastrophe. In 2008, she wrote that her doubts had been "crystalised" by Nigel Lawson's book "An Appeal to Reason"; in 2014, she likened Lawson's difficulty in getting the book published to the book-burnings in Nazi Germany. Later in 2008, Widdecombe claimed that the "science of climate change is robustly disputed", then, in 2009, that "There is no climate change, hasn't anybody looked out of their window recently?" She was one of the five MPs who voted against the Climate Change Act 2008. The previous year, she voted to support a parliamentary motion in favour of homeopathy, disagreeing with the Science and Technology Committee's Report on the subject. LGBT rights. Widdecombe supported the partial decriminalisation of homosexuality in 1967 in England and Wales. After that, Widdecombe consistently opposed further reforms while in Parliament. Out of the 17 parliamentary votes between 1998 and 2008 considered by the Public Whip website to concern equal rights for homosexuals, Widdecombe took the opposing position in 15 cases, not being present at the other two votes. 
In 1999, Widdecombe stated that "I do not think that [homosexuality] can be promoted as an equally valid lifestyle to [heterosexual] marriage, but I would say the same about irregular heterosexual arrangements." She has consistently argued against an equal age of consent for same-sex relationships, voting against a 1994 act (which would have reduced the age of consent for some male-male sexual activity from 21 to 18), and in 1998 (arguing against a further reduction from 18 to 16, which later occurred in 2000). On the latter occasion, she wrote in "The Mail on Sunday" that "one of the sundry horrors for which this Government is likely to be remembered will be that it gave its imprimatur to sodomy at 16". She later said in 2000: "I do not believe that issues of equality should override the imperatives of protecting the young." In 2003, Widdecombe opposed the repeal of Section 28 of the Local Government Act 1988. In 2012, Widdecombe voiced support in the "Daily Express" for the practice of conversion therapy, which claims to change the orientation of homosexuals. Widdecombe has also expressed her opposition to same-sex marriage, introduced by David Cameron's government in 2014, arguing that "the state must have a preferred model" which is "a union that is generally open to procreation". She also opposes gender self-identification for transgender people. In 2020, she expressed her opposition to same-sex dancing on "Strictly Come Dancing", saying: "I don't think it is what viewers of "Strictly", especially families, are looking for. But that's up to the audience and the programme." Controversies. In 2009, she partially defended Carol Thatcher's use of the racial slur golliwog on "Any Questions?", saying: "There is a generation to whom a golliwog is merely a toy, a generation which was much endeared by its golliwogs which grew up with them on jam jars ... and there is a generation, a new generation for whom that word is deeply offensive and one does have to make I think some allowance for the fact." In December 2019, WhatsApp conversations between her and Brexit Party activists, leaked to the "Plymouth Herald", showed Widdecombe using the term amid rumours that Brexit Party campaign funding was being diverted away from Plymouth ahead of that year's general election. Widdecombe said: "Yes, I threw all my toys of the pram. Bears and gollywogs flying everywhere!!". In 2019 Widdecombe defended the comments she made in a 2012 article that supported "gay conversion" therapy. She told Sky News that science may yet "provide an answer" to the question of whether people can "switch sexuality". Following Widdecombe's apparent endorsement of conversion therapy, at least one venue, the Landmark theatre in Ilfracombe, Devon, cancelled a performance of her one-woman show. Widdecombe and two other Brexit Party figures were criticised for previous appearances on the David Icke-affiliated "Richie Allen Show", which has been accused of promoting Holocaust denial and antisemitic conspiracy theories about the Rothschild family and Zionism. Widdecombe appeared three times between August 2017 and April 2019 and was described as an "old friend of the show" by the host during one appearance. 
Widdecombe told "Jewish Chronicle" that she agreed to appear to discuss Brexit, and that she "had never heard of the "Richie Allen Show" until I agreed to go on" and distanced herself from its antisemitic content by, among other things, pointing to her membership of the Conservative Friends of Israel, B'nai B'rith event speeches, and her novel "An Act of Treachery", which she said is set during the Holocaust. Widdecombe was elected as a Member of the European Parliament for the Brexit Party on 23 May 2019 in the European elections. On 3 July 2019 she used her maiden speech in Strasbourg to compare Brexit to slaves revolting against their owners and to a colonised country rising up against occupying forces, a stance which was criticised by members of both the European Parliament and the British House of Commons. Media work and appearances. In 2002 she took part in the ITV programme "Celebrity Fit Club". Also in 2002 she took part in a Louis Theroux television documentary, depicting her life, both in and out of politics. In March 2004 she briefly became "The Guardian" newspaper's agony aunt, introduced with an Emma Brockes interview. In 2005 BBC Two showed six episodes of "The Widdecombe Project", an agony aunt television programme. In 2005, she appeared in a new series of "Celebrity Fit Club", this time as an agony aunt. Also in 2005, she presented the show "Ann Widdecombe to the Rescue" in which she acted as an agony aunt, dispensing advice to disputing families, couples, and others across the UK. In 2005, she appeared in a discussion programme on Five to discuss who had been England's greatest monarch since the Norman Conquest; her choice of monarch was Charles II. She was the guest host of news quiz "Have I Got News for You" twice, in 2006 and 2007. Her first appearance as guest host, in 2006, was widely regarded as a success. Following her second appearance, Widdecombe said she would never appear on the show again because of comments made by panellist Jimmy Carr which she considered filth, though she called regular panellists Ian Hislop and Paul Merton "the fastest wits in showbusiness". Merton later revealed that he thought Widdecombe had been "the worst ever presenter" of the show, particularly on her second appearance where Merton claimed she "thought she was Victoria Wood". In 2007 she awarded the "University Challenge" trophy to the winners. In the same year, she appeared in "The Sound of Drums", the 12th episode of the third series of the science-fiction drama "Doctor Who", endorsing the Master's Prime Minister campaign. In 2007 and 2008 Widdecombe fronted a television series called "Ann Widdecombe Versus", on ITV1, in which she spoke to various people about things related to her as an MP, with an emphasis on confronting those responsible for problems she wished to tackle. In 2007 she talked about prostitution, social benefits, and truancy. A fourth episode was screened on 18 September 2008 in which she travelled around London and Birmingham talking to girl gangs. In 2009, Widdecombe appeared with Archbishop John Onaiyekan in an "Intelligence Squared" debate in which they defended the motion that the Catholic Church was a force for good. Arguing against the motion were Stephen Fry and Christopher Hitchens, who won the debate overall. In October 2010, she appeared on BBC One's "Strictly Come Dancing", partnered by Anton du Beke, winning the support of some viewers despite low marks from the judges. 
After nine weeks of routines strongly flavoured by comedy, the couple was eliminated after finishing in the bottom two. In 2011 Widdecombe played the Lord Mayoress in an episode of "Sooty". In 2012, Widdecombe hosted the 30 one-hour episodes of "Cleverdicks", a quiz show for the Sky Atlantic channel. In April 2012 Widdecombe presented an hour-long documentary for BBC Radio 5 Live, "Drunk Again: Ann Widdecombe Investigates", looking at how the British attitude to alcohol consumption had changed over the previous few years. Widdecombe was in a "Strictly Come Dancing" special on Children in Need's 2012 appeal night. On 4 November 2012, Widdecombe guest-hosted one episode of BBC's "Songs of Praise" programme about singleness. In October 2014, she appeared in the BBC series "Celebrity Antiques Road Trip" with expert Mark Stacey. Widdecombe took part in a four-part BBC One television series "24 Hours in the Past", along with Colin Jackson, Alistair McGowan, Miquita Oliver, Tyger Drew-Honey and Zoe Lucker in April and May 2015, involving experiencing life as workers in a dustyard, coachhouse, pottery, and as workhouse inmates in 1840s Britain. She took part in an episode of "Tipping Point: Lucky Stars" in 2016. In 2017, Widdecombe took part in ITV's "Sugar Free Farm". In January 2018, Widdecombe participated in the Celebrity Big Brother twenty-first series; she was criticised over her comments regarding the Harvey Weinstein controversy and over remarks to her fellow housemates perceived to be anti-LGBT, most notably to drag queen Courtney Act (Shane Jenek). She finished the competition in second place, behind Jenek. In 2019 Widdecombe appeared on the new celebrity version of "The Crystal Maze", where alongside Sunetra Sarker, Wes Nelson, Matthew Wright and Nikki Sanderson, she won money for Stand Up to Cancer. In 2020 Widdecombe travelled to Norway for three days to visit Halden Prison for the documentary "The World's Most Luxurious Prison". Stage acting career. Following her retirement, Widdecombe made her stage debut, on 9 December 2011, at the Orchard Theatre, Dartford, in the Christmas pantomime "Snow White and the Seven Dwarfs", alongside "Strictly Come Dancing" judge Craig Revel Horwood. In April 2012, she had a ten-minute non-singing cameo part in Gaetano Donizetti's comic opera "La Fille du Regiment", playing the Duchesse de Crackentorp. Widdecombe reprised her pantomime performance, again with Horwood, at the Swan Theatre, High Wycombe in December 2012. Widdecombe stepped in at short notice to play the Evil Queen in "Snow White and the Seven Dwarfs" (based on the tale published by the Brothers Grimm in 1812) at Bridlington Spa in December 2016. She replaced the injured Lorraine Chase. This was Widdecombe's first appearance as a pantomime 'baddie', a role she told the press she had always hoped for. In December 2017 Widdecombe played the Empress of China in the pantomime "Aladdin" at the Marina Theatre in Lowestoft. Personal life and family. Until her retirement following the 2010 general election, Widdecombe divided her time between her two homes – one in London and one in the countryside village of Sutton Valence, Kent, in her constituency. She sold both upon retiring from Parliament. She shared her home in London with her widowed mother, Rita Widdecombe, until Rita's death, on 25 April 2007, aged 95. In March 2008, she bought a house in Haytor Vale, on Dartmoor in Devon, where she retired. 
Her brother, Malcolm (1937–2010), who was an Anglican canon in Bristol, retired in May 2009 and died in October 2010. Her nephew, Roger Widdecombe, is an Anglican priest. She has never married and has no children. In November 2007 on BBC Radio 4 she described how a journalist once produced a profile on her with the assumption that she had had at least "one sexual relationship", to which Widdecombe replied: "Be careful, that's the way you get sued". When interviewer Jenni Murray asked if she had ever had a sexual relationship, Widdecombe laughed: "it's nobody else's business". A 2001 report in "The Guardian" said that she had had a three-year romance while studying at the University of Oxford; Widdecombe confirmed this in January 2018 on the UK reality TV show "Big Brother", explaining that she had ended the romance in order to prioritise her career. Widdecombe has a fondness for cats and many other animals such as foxes; a section of her website, the "Widdyweb", is about the pet cats she has lived with. Widdecombe adopted two goats at the Buttercups Goat Sanctuary in Boughton Monchelsea near Maidstone. In an interview, Widdecombe talked about her appreciation of music, despite describing herself as "pretty well tone-deaf". Outside politics she writes novels and a weekly column for the "Daily Express". In January 2011 Widdecombe was President of the North of England Education Conference in Blackpool, and gave a speech there supporting selective education and opposing the ban on new grammar schools being built. She also became a patron of The Grace Charity for M.E. In April 2012 Widdecombe said that she was writing her own autobiography, which she described as "rude about all and sundry, but an amount of truth is always necessary". Widdecombe is a patron of the charity Safe Haven for Donkeys in the Holy Land (SHADH) and in 2014 visited the SHADH Donkey Sanctuary in the West Bank. Religious views. Widdecombe became an Anglican in her 30s, after a period of being an agnostic following her departure from religious schooling. She converted to Catholicism in 1993 after leaving the Church of England, explaining to reporters from the "New Statesman": In October 2006, she pledged to boycott British Airways, which had suspended a worker who refused to hide her Christian cross, until the company reversed the suspension. In 2010, Widdecombe turned down the offer to be Britain's next ambassador to the Holy See, having been prevented from accepting by a detached retina. She was made a Dame of the Order of St. Gregory the Great by Pope Benedict XVI for services to politics and public life on 31 January 2013.
2425
Aurangzeb
Muhi al-Din Muhammad (3 November 1618 – 3 March 1707), commonly known as Aurangzeb and by his regnal title Alamgir I, was the sixth emperor of the Mughal Empire, ruling from July 1658 until his death in 1707. Under his emperorship, the Mughals reached their greatest extent, with their territory spanning nearly the entirety of the Indian subcontinent. Widely considered to be the last effective Mughal ruler, Aurangzeb compiled the "Fatawa 'Alamgiri" and was amongst the few monarchs to fully establish Sharia and Islamic economics throughout the Indian subcontinent. Aurangzeb belonged to the aristocratic Timurid dynasty, held administrative and military posts under his father Shah Jahan and gained recognition as an accomplished military commander. Aurangzeb served as the viceroy of the Deccan in 1636–1637 and the governor of Gujarat in 1645–1647. He jointly administered the provinces of Multan and Sindh in 1648–1652 and continued expeditions into the neighboring Safavid territories. In September 1657, Shah Jahan nominated his eldest and most liberal son Dara Shikoh as his successor, a move repudiated by Aurangzeb, who proclaimed himself emperor in February 1658. In April 1658, Aurangzeb defeated the allied army of Shikoh and the Kingdom of Marwar at the battle of Dharmat. Aurangzeb's decisive victory at the battle of Samugarh in May 1658 cemented his sovereignty and his suzerainty was acknowledged throughout the Empire. After Shah Jahan recovered from illness in July 1658, Aurangzeb declared him incompetent to rule and imprisoned his father in the Agra Fort. Under Aurangzeb's emperorship, the Mughal Empire reached its greatest territorial extent, spanning nearly the entire Indian subcontinent. His reign is characterized by a period of rapid military expansion, with several dynasties and states being overthrown by the Mughals. His conquests acquired him the regnal title "Alamgir" ('Conqueror of the World'). The Mughals also surpassed Qing China as the world's largest economy and biggest manufacturing power. The Mughal military gradually improved and became one of the strongest armies in the world. A staunch Muslim, Aurangzeb is credited with the construction of numerous mosques and patronizing works of Arabic calligraphy. He successfully imposed the Fatawa 'Alamgiri as the principal regulating body of the empire and prohibited religiously forbidden activities in Islam. Although Aurangzeb suppressed several local revolts, he maintained cordial relations with foreign governments. Aurangzeb is generally considered by historians to be one of the greatest emperors in Indian history. While there is considerable admiration for Aurangzeb in contemporary sources, he has been criticized over some of his policies. Early life. Aurangzeb was born in Dahod in 1618. His father was Emperor Shah Jahan, who hailed from the Mughal house of the Timurid dynasty. The latter was descended from Emir Timur, the founder of the Timurid Empire. Aurangzeb's mother Mumtaz Mahal was the daughter of the Persian nobleman Asaf Khan, who was the youngest son of vizier Mirza Ghiyas. Aurangzeb was born during the reign of his patrilineal grandfather Jahangir, the fourth emperor of the Mughal Empire. In June 1626, after an unsuccessful rebellion by his father, eight-year-old Aurangzeb and his brother Dara Shikoh were sent to the Mughal court in Lahore as hostages of their grandfather Jahangir and his wife, Nur Jahan, as part of their father's pardon deal.
After Jahangir died in 1627, Shah Jahan emerged victorious in the ensuing war of succession to the Mughal throne. Aurangzeb and his brother were consequently reunited with Shah Jahan in Agra. Aurangzeb received a Mughal princely education covering subjects like combat, military strategy, and administration. His curriculum also included scholarly areas like Islamic studies and Turkic and Persian literature. Aurangzeb grew up fluent in the Hindi of his time. On 28 May 1633, Aurangzeb escaped death when a powerful war elephant stampeded through the Mughal imperial encampment. He rode against the elephant and struck its trunk with a lance, and successfully defended himself from being crushed. Aurangzeb's valour was appreciated by his father, who conferred on him the title of "Bahadur" (Brave), had him weighed in gold and presented him with gifts worth Rs. 200,000. This event was celebrated in Persian and Urdu verses. Early military campaigns and administration. Bundela War. Aurangzeb was nominally in charge of the force sent to Bundelkhand with the intent of subduing the rebellious ruler of Orchha, Jhujhar Singh, who had attacked another territory in defiance of Shah Jahan's policy and was refusing to atone for his actions. By arrangement, Aurangzeb stayed in the rear, away from the fighting, and took the advice of his generals as the Mughal Army gathered and commenced the siege of Orchha in 1635. The campaign was successful and Singh was removed from power. Viceroy of the Deccan. Aurangzeb was appointed viceroy of the Deccan in 1636. After Shah Jahan's vassals had been devastated by the alarming expansion of Ahmednagar during the reign of the Nizam Shahi boy-prince Murtaza Shah III, the emperor dispatched Aurangzeb, who in 1636 brought the Nizam Shahi dynasty to an end. In 1637, Aurangzeb married the Safavid princess Dilras Banu, posthumously known as Rabia-ud-Daurani. She was his first wife and chief consort as well as his favourite. He also had an infatuation with a slave girl, Hira Bai, whose death at a young age greatly affected him. In his old age, he was under the charms of his concubine, Udaipuri Mahal. The latter had formerly been a companion to Dara Shukoh. In the same year, 1637, Aurangzeb was placed in charge of annexing the small Rajput kingdom of Baglana, which he did with ease. In 1638, Aurangzeb married Nawab Bai, later known as Rahmat al-Nisa. That same year, Aurangzeb dispatched an army to subdue the Portuguese coastal fortress of Daman; however, his forces met stubborn resistance and were eventually repulsed at the end of a long siege. At some point, Aurangzeb married Aurangabadi Mahal, who was a Circassian or Georgian. In 1644, Aurangzeb's sister, Jahanara, was burned in Agra when the chemicals in her perfume were ignited by a nearby lamp. This event precipitated a family crisis with political consequences. Aurangzeb suffered his father's displeasure by not returning to Agra immediately but rather three weeks later. Shah Jahan had been nursing Jahanara back to health in that time and thousands of vassals had arrived in Agra to pay their respects. Shah Jahan was outraged to see Aurangzeb enter the interior palace compound in military attire and immediately dismissed him from his position of viceroy of the Deccan; Aurangzeb was also no longer allowed to use red tents or to associate himself with the official military standard of the Mughal emperor.
Other sources tell us that Aurangzeb was dismissed from his position because he had abandoned the life of luxury and become a "faqir". Governor of Gujarat. In 1645, he was barred from the court for seven months and mentioned his grief to fellow Mughal commanders. Thereafter, Shah Jahan appointed him governor of Gujarat. His rule in Gujarat was marked with religious disputes but he was rewarded for bringing stability. Governor of Balkh. In 1647, Shah Jahan moved Aurangzeb from Gujarat to be governor of Balkh, replacing a younger son, Murad Baksh, who had proved ineffective there. The area was under attack from Uzbek and Turkmen tribes. While the Mughal artillery and muskets were a formidable force, so too were the skirmishing skills of their opponents. The two sides were in stalemate and Aurangzeb discovered that his army could not live off the land, which was devastated by war. With the onset of winter, he and his father had to make a largely unsatisfactory deal with the Uzbeks, giving away territory in exchange for nominal recognition of Mughal sovereignty. The Mughal force suffered still further with attacks by Uzbeks and other tribesmen as it retreated through the snow to Kabul. By the end of this two-year campaign, into which Aurangzeb had been plunged at a late stage, a vast sum of money had been expended for little gain. Further inauspicious military involvements followed, as Aurangzeb was appointed governor of Multan and Sindh. His efforts in 1649 and 1652 to dislodge the Safavids at Kandahar, which they had recently retaken after a decade of Mughal control, both ended in failure as winter approached. The logistical problems of supplying an army at the extremity of the empire, combined with the poor quality of armaments and the intransigence of the opposition, have been cited by John Richards as the reasons for failure, and a third attempt in 1653, led by Dara Shikoh, met with the same outcome. Second term as Viceroy of the Deccan. Aurangzeb became viceroy of the Deccan again after he was replaced by Dara Shukoh in the attempt to recapture Kandahar. Aurangzeb regretted this and harboured feelings that Shikoh had manipulated the situation to serve his own ends. Aurangzeb's two "jagirs" (land grants) were moved there as a consequence of his return and, because the Deccan was a relatively impoverished area, this caused him to lose out financially. So poor was the area that grants were required from Malwa and Gujarat in order to maintain the administration, and the situation caused ill-feeling between father and son. Shah Jahan insisted that things could be improved if Aurangzeb made efforts to develop cultivation. Aurangzeb appointed Murshid Quli Khan to extend to the Deccan the "zabt" revenue system used in northern India. Murshid Quli Khan organised a survey of agricultural land and a tax assessment on what it produced. To increase revenue, Murshid Quli Khan granted loans for seed, livestock, and irrigation infrastructure, and the Deccan returned to prosperity. Aurangzeb proposed to resolve the situation by attacking the dynastic occupants of Golconda (the Qutb Shahis) and Bijapur (the Adil Shahis). As an adjunct to resolving the financial difficulties, the proposal would also extend Mughal influence by accruing more lands. Aurangzeb advanced against the Sultan of Bijapur and besieged Bidar. The "Kiladar" (governor or captain) of the fortified city, Sidi Marjan, was mortally wounded when a gunpowder magazine exploded.
After twenty-seven days of hard fighting, Bidar was captured by the Mughals and Aurangzeb continued his advance. Again, he was to feel that Dara had exerted influence on his father: believing that he was on the verge of victory in both instances, Aurangzeb was frustrated that Shah Jahan chose then to settle for negotiations with the opposing forces rather than pushing for complete victory. War of succession. The four sons of Shah Jahan all held governorships during their father's reign. The emperor favoured the eldest, Dara Shikoh. This had caused resentment among the younger three, who sought at various times to strengthen alliances between themselves and against Dara. There was no Mughal tradition of primogeniture, the systematic passing of rule, upon an emperor's death, to his eldest son. Instead it was customary for sons to overthrow their father and for brothers to war to the death among themselves. Historian Satish Chandra says that "In the ultimate resort, connections among the powerful military leaders, and military strength and capacity [were] the real arbiters". The contest for power was primarily between Dara Shikoh and Aurangzeb because, although all four sons had demonstrated competence in their official roles, it was around these two that the supporting cast of officials and other influential people mostly circulated. There were ideological differences — Dara was an intellectual and a religious liberal in the mould of Akbar, while Aurangzeb was much more conservative — but, as historians Barbara D. Metcalf and Thomas R. Metcalf say, "To focus on divergent philosophies neglects the fact that Dara was a poor general and leader. It also ignores the fact that factional lines in the succession dispute were not, by and large, shaped by ideology." Marc Gaborieau, professor of Indian studies at l'École des Hautes Études en Sciences Sociales, explains that "The loyalties of [officials and their armed contingents] seem to have been motivated more by their own interests, the closeness of the family relation and above all the charisma of the pretenders than by ideological divides." Muslims and Hindus did not divide along religious lines in their support for one pretender or the other nor, according to Chandra, is there much evidence to support the belief that Jahanara and other members of the royal family were split in their support. Jahanara, certainly, interceded at various times on behalf of all of the princes and was well-regarded by Aurangzeb even though she shared the religious outlook of Dara. In 1656, a general of the Qutb Shahi dynasty named Musa Khan led an army of 12,000 musketeers to attack Aurangzeb, who was besieging Golconda Fort. Later in the same campaign, Aurangzeb, in turn, rode against an army consisting of 8,000 horsemen and 20,000 Karnataki musketeers. Having made clear that he wanted Dara to succeed him, Shah Jahan became ill with strangury in 1657 and was closeted under the care of his favourite son in the newly built city of Shahjahanabad (Old Delhi). Rumours of the death of Shah Jahan abounded and the younger sons were concerned that Dara might be hiding it for Machiavellian reasons. Thus, they took action. In Bengal, where he had been governor since 1637, Prince Muhammad Shuja (Shah Shuja) crowned himself King at RajMahal, and brought his cavalry, artillery and river flotilla upriver towards Agra.
Near Varanasi his forces confronted a defending army sent from Delhi under the command of Prince Sulaiman Shukoh, son of Dara Shukoh, and Raja Jai Singh. Murad did the same in his governorship of Gujarat, and Aurangzeb did so in the Deccan. It is not known whether these preparations were made in the mistaken belief that the rumours of death were true or whether the challengers were just taking advantage of the situation. After regaining some of his health, Shah Jahan moved to Agra and Dara urged him to send forces to challenge Shah Shuja and Murad, who had declared themselves rulers in their respective territories. While Shah Shuja was defeated at Banares in February 1658, the army sent to deal with Murad discovered to their surprise that he and Aurangzeb had combined their forces, the two brothers having agreed to partition the empire once they had gained control of it. The two armies clashed at Dharmat in April 1658, with Aurangzeb being the victor. Shuja was being chased through Bihar and the victory of Aurangzeb proved this to be a poor decision by Dara Shikoh, who now had a defeated force on one front and a successful force unnecessarily pre-occupied on another. Realising that his recalled Bihar forces would not arrive at Agra in time to resist the emboldened Aurangzeb's advance, Dara scrambled to form alliances but found that Aurangzeb had already courted key potential candidates. When Dara's disparate, hastily concocted army clashed with Aurangzeb's well-disciplined, battle-hardened force at the battle of Samugarh in late May, neither Dara's men nor his generalship were any match for Aurangzeb. Dara had also become over-confident in his own abilities and, by ignoring advice not to lead in battle while his father was alive, he cemented the idea that he had usurped the throne. "After the defeat of Dara, Shah Jahan was imprisoned in the fort of Agra where he spent eight long years under the care of his favourite daughter Jahanara." Aurangzeb then broke his arrangement with Murad Baksh, which probably had been his intention all along. Instead of looking to partition the empire between himself and Murad, he had his brother arrested and imprisoned at Gwalior Fort. Murad was executed on 4 December 1661, ostensibly for the murder of the "diwan" of Gujarat sometime earlier. The allegation was encouraged by Aurangzeb, who caused the "diwan's" son to seek retribution for the death under the principles of Sharia law. Meanwhile, Dara gathered his forces, and moved to the Punjab. The army sent against Shuja was trapped in the east; its generals Jai Singh and Dilir Khan submitted to Aurangzeb, but Dara's son, Suleiman Shikoh, escaped. Aurangzeb offered Shah Shuja the governorship of Bengal. This move had the effect of isolating Dara Shikoh and causing more troops to defect to Aurangzeb. Shah Shuja, who had declared himself emperor in Bengal, began to annex more territory and this prompted Aurangzeb to march from Punjab with a new and large army that fought during the battle of Khajwa, where Shah Shuja and his chain-mail armoured war elephants were routed by the forces loyal to Aurangzeb. Shah Shuja then fled to Arakan (in present-day Burma), where he was executed by the local rulers. With Shuja and Murad disposed of, and with his father immured in Agra, Aurangzeb pursued Dara Shikoh, chasing him across the north-western bounds of the empire. Aurangzeb claimed that Dara was no longer a Muslim and accused him of poisoning the Mughal Grand Vizier Saadullah Khan.
After a series of battles, defeats and retreats, Dara was betrayed by one of his generals, who arrested and bound him. In 1658, Aurangzeb arranged his formal coronation in Delhi. On 10 August 1659, Dara was executed on grounds of apostasy and his head was sent to Shah Jahan. The first prominent execution ordered by Aurangzeb was that of his brother Prince Dara Shikoh, who was accused of being influenced by Hinduism, although some sources argue it was done for political reasons. Aurangzeb had his allied brother Prince Murad Baksh held for murder, judged and then executed. Aurangzeb is accused of poisoning his imprisoned nephew Sulaiman Shikoh. Having secured his position, Aurangzeb confined his frail father at the Agra Fort but did not mistreat him. Shah Jahan was cared for by Jahanara and died in 1666. Reign. Bureaucracy. Aurangzeb's imperial bureaucracy employed significantly more Hindus than that of his predecessors. Between 1679 and 1707, the number of Hindu officials in the Mughal administration rose by half, to represent 31.6% of Mughal nobility, the highest in the Mughal era. Many of them were Marathas and Rajputs, who were his political allies. However, Aurangzeb encouraged high ranking Hindu officials to convert to Islam. Economy. Under his reign, the Mughal Empire accounted for nearly 25% of the world's GDP, surpassing Qing China to become the world's largest economy and biggest manufacturing power, larger than the entirety of Western Europe; its largest and wealthiest subdivision, the Bengal Subah, signaled proto-industrialization. Establishment of Islamic law. Aurangzeb was an orthodox Muslim ruler. Subsequent to the policies of his three predecessors, he endeavored to make Islam a dominant force in his reign. However, these efforts brought him into conflict with the forces that were opposed to this revival. Aurangzeb was a follower of the Mujaddidi Order and a disciple of the son of the Punjabi saint Ahmad Sirhindi. He sought to establish Islamic rule as instructed and inspired by him. Historian Katherine Brown has noted that "The very name of Aurangzeb seems to act in the popular imagination as a signifier of politico-religious bigotry and repression, regardless of historical accuracy." The subject has also resonated in modern times with popularly accepted claims that he intended to destroy the Bamiyan Buddhas. As a political and religious conservative, Aurangzeb chose not to follow the secular-religious viewpoints of his predecessors after his ascension. He made no mention of the Persian concept of kingship, the Farr-i-Aizadi, and based his rule on the Quranic concept of kingship. Shah Jahan had already moved away from the liberalism of Akbar, although in a token manner rather than with the intent of suppressing Hinduism, and Aurangzeb took the change still further. Though the approach to faith of Akbar, Jahangir and Shah Jahan was more syncretic than that of Babur, the founder of the empire, Aurangzeb's position is not so obvious. His emphasis on sharia competed, or was directly in conflict, with his insistence that "zawabit" or secular decrees could supersede sharia. With the chief qazi refusing to crown him in 1659, Aurangzeb had a political need to present himself as a "defender of the sharia" due to popular opposition to his actions against his father and brothers. Despite claims of sweeping edicts and policies, contradictory accounts exist. Historian Katherine Brown has argued that Aurangzeb never imposed a complete ban on music.
He sought to codify Hanafi law through the work of several hundred jurists; the resulting compilation was called the Fatawa 'Alamgiri. It is possible the War of Succession and continued incursions, combined with Shah Jahan's spending, made cultural expenditure impossible. He learnt that at Multan, Thatta, and particularly at Varanasi, the teachings of Hindu Brahmins attracted numerous Muslims. He ordered the subahdars of these provinces to demolish the schools and the temples of non-Muslims. Aurangzeb also ordered subahdars to punish Muslims who dressed like non-Muslims. The executions of the antinomian Sufi mystic Sarmad Kashani and the ninth Sikh Guru Tegh Bahadur bear testimony to Aurangzeb's religious policy; the former was beheaded on multiple accounts of heresy, the latter, according to Sikhs, because he objected to Aurangzeb's forced conversions. Aurangzeb had also banned the celebration of the Zoroastrian festival of Nauroz along with other un-Islamic ceremonies, and encouraged conversions to Islam; instances of persecution against particular Muslim factions were also reported. Taxation policy. Shortly after coming to power, Aurangzeb remitted more than 80 long-standing taxes affecting all of his subjects. In 1679, Aurangzeb chose to re-impose "jizya", a military tax on non-Muslim subjects in lieu of military service, after an abatement spanning a hundred years, a move criticised by many Hindu rulers, family members of Aurangzeb, and Mughal court officials. The specific amount varied with the socioeconomic status of a subject, and tax collection was often waived for regions hit by calamities; also, Brahmins, women, children, elders, the handicapped, the unemployed, the ill, and the insane were all perpetually exempted. The collectors were mandated to be Muslims. A majority of modern scholars reject that religious bigotry influenced the imposition; rather, realpolitik — economic constraints as a result of multiple ongoing battles and the establishment of credence with the orthodox Ulema — is held to be the primary agent. Aurangzeb also enforced a higher tax burden on Hindu merchants at the rate of 5% (as against 2.5% on Muslim merchants), which led to considerable dislike of Aurangzeb's economic policies, a sharp turn from Akbar's uniform tax code. According to Marc Jason Gilbert, Aurangzeb ordered the jizya fees to be paid in person, in front of a tax collector, where the non-Muslims were to recite a verse in the Quran which referred to their inferior status as non-Muslims. This decision led to protests and lamentations among the masses as well as Hindu court officials. In order to meet state expenditures, Aurangzeb had ordered increases in land taxes, the burden of which fell heavily upon the Hindu Jats. The reimposition of the jizya encouraged Hindus to flee to areas under East India Company jurisdiction, under which policies of religious sufferance and pretermissions of religious taxes prevailed. Policy on temples and mosques. Aurangzeb issued land grants and provided funds for the maintenance of shrines of worship but also (often) ordered their destruction. Modern historians reject the view of colonial and nationalist historians that these destructions were guided by religious zealotry; rather, they emphasize the association of temples with sovereignty, power and authority.
Whilst constructing mosques was considered an act of royal duty to subjects, there are also several "firmans" in Aurangzeb's name supporting temples, "maths", Chishti shrines, and gurudwaras, including the Mahakaleshwar temple of Ujjain, a gurudwara at Dehradun, the Balaji temple of Chitrakoot, the Umananda Temple of Guwahati and the Shatrunjaya Jain temples, among others. Numerous new temples were built, as well. Contemporary court chronicles mention hundreds of temples which were demolished by Aurangzeb or his chieftains, upon his order. In September 1669, he ordered the destruction of the Vishvanath Temple at Varanasi, which had been established by Raja Man Singh, whose grandson Jai Singh was believed to have facilitated Shivaji's escape. After the Jat rebellion in Mathura (early 1670), which killed the patron of the town-mosque, Aurangzeb suppressed the rebels and ordered the city's Kesava Deo temple to be demolished and replaced with an "Eidgah". In 1672–73, Aurangzeb ordered the resumption of all grants held by Hindus throughout the empire, though this was not followed absolutely in regions such as Gujarat, where lands granted in in'am to Charans were not affected. In around 1679, he ordered the destruction of several prominent temples, including those of Khandela, Udaipur, Chittor and Jodhpur, which had been patronised by rebels. The Jama Masjid at Golkunda was similarly treated, after it was found that its ruler had built it to hide revenues from the state; however, desecration of mosques was rare owing to their lack of political capital compared with temples. In an order specific to Benaras, Aurangzeb invoked Sharia to declare that Hindus would be granted state protection and temples would not be razed (but prohibited the construction of any new temples); other orders to similar effect can be located. Richard Eaton, upon a critical evaluation of primary sources, counts 15 temples as having been destroyed during Aurangzeb's reign. Ian Copland and others reiterate Iqtidar Alam Khan, who notes that, overall, Aurangzeb built more temples than he destroyed. Execution of opponents. In 1689, the second Maratha Chhatrapati (King) Sambhaji was executed by Aurangzeb. In a sham trial, he was found guilty of murder and violence, and of atrocities against the Muslims of Burhanpur and Bahadurpur in Berar committed by Marathas under his command. In 1675 the ninth Sikh Guru, Tegh Bahadur, was arrested on Aurangzeb's orders and later executed after he refused to convert to Islam. The 32nd Da'i al-Mutlaq (Absolute Missionary) of the Dawoodi Bohra sect of Musta'lī Islam, Syedna Qutubkhan Qutubuddin, was executed by Aurangzeb, then governor of Gujarat, for heresy on 27 Jumadil Akhir 1056 AH (1648 AD) in Ahmedabad, India. Expansion of the Mughal Empire. In 1663, during his visit to Ladakh, Aurangzeb established direct control over that part of the empire and loyal subjects such as Deldan Namgyal agreed to pledge tribute and loyalty. Deldan Namgyal is also known to have constructed a Grand Mosque in Leh, which he dedicated to Mughal rule. In 1664, Aurangzeb appointed Shaista Khan subedar (governor) of Bengal. Shaista Khan eliminated Portuguese and Arakanese pirates from the region, and in 1666 recaptured the port of Chittagong from the Arakanese king, Sanda Thudhamma. Chittagong remained a key port throughout Mughal rule. In 1685, Aurangzeb dispatched his son, Muhammad Azam Shah, with a force of nearly 50,000 men to capture Bijapur Fort and defeat Sikandar Adil Shah (the ruler of Bijapur), who refused to be a vassal.
The Mughals could not make any advancements upon Bijapur Fort, mainly because of the superior usage of cannon batteries on both sides. Outraged by the stalemate, Aurangzeb himself arrived on 4 September 1686 and commanded the siege of Bijapur; after eight days of fighting, the Mughals were victorious. Only one remaining ruler, Abul Hasan Qutb Shah (the Qutbshahi ruler of Golconda), refused to surrender. He and his servicemen fortified themselves at Golconda and fiercely protected the Kollur Mine, which was then probably the world's most productive diamond mine, and an important economic asset. In 1687, Aurangzeb led his grand Mughal army against the Deccan Qutbshahi fortress during the siege of Golconda. The Qutbshahis had constructed massive fortifications throughout successive generations on a granite hill over 400 ft high with an enormous eight-mile long wall enclosing the city. The main gates of Golconda had the ability to repulse any war elephant attack. Although the Qutbshahis maintained the impregnability of their walls, at night Aurangzeb and his infantry erected complex scaffolding that allowed them to scale the high walls. During the eight-month siege the Mughals faced many hardships including the death of their experienced commander Kilich Khan Bahadur. Eventually, Aurangzeb and his forces managed to penetrate the walls by capturing a gate, and their entry into the fort led Abul Hasan Qutb Shah to surrender peacefully. Military equipment. Mughal cannon making skills advanced during the 17th century. One of the most impressive Mughal cannons is known as the Zafarbaksh, a very rare "composite cannon" that required skills in both wrought-iron forge welding and bronze-casting technologies and in-depth knowledge of the qualities of both metals. The "Ibrahim Rauza" was a famed cannon, well known for its multiple barrels. François Bernier, the personal physician to Aurangzeb, observed versatile Mughal gun-carriages each drawn by two horses. Despite these innovations, most soldiers used bows and arrows, the quality of sword manufacture was so poor that they preferred to use ones imported from England, and the operation of the cannons was entrusted not to Mughals but to European gunners. Other weapons used during the period included rockets, cauldrons of boiling oil, muskets and manjaniqs (stone-throwing catapults). Infantry, who were later called Sepoys and who specialised in siege and artillery, emerged during the reign of Aurangzeb. War elephants. In 1703, the Mughal commander at Coromandel, Daud Khan Panni, spent 10,500 coins to purchase 30 to 50 war elephants from Ceylon. Art and culture. Aurangzeb was noted for his religious piety; he memorized the entire Quran, studied hadiths and stringently observed the rituals of Islam, and "transcribe[d] copies of the Quran." Aurangzeb had a more austere nature than his predecessors, and greatly reduced imperial patronage of the figurative Mughal miniature. This had the effect of dispersing the court atelier to other regional courts. Being religious, he encouraged Islamic calligraphy. His reign also saw the building of the Lahore Badshahi Masjid and the Bibi Ka Maqbara in Aurangabad for his wife Rabia-ud-Daurani. Aurangzeb was considered a "Mujaddid" by contemporary Muslims. Calligraphy. The Mughal Emperor Aurangzeb is known to have patronised works of Islamic calligraphy; the demand for Quran manuscripts in the "naskh" style peaked during his reign.
Having been instructed by Syed Ali Tabrizi, Aurangzeb was himself a talented calligrapher in "naskh", evidenced by Quran manuscripts that he created. Architecture. Aurangzeb was not as involved in architecture as his father. Under Aurangzeb's rule, the position of the Mughal Emperor as chief architectural patron began to diminish. However, Aurangzeb did endow some significant structures. Catherine Asher terms his architectural period as an "Islamization" of Mughal architecture. One of the earliest constructions after his accession was a small marble mosque known as the Moti Masjid (Pearl Mosque), built for his personal use in the Red Fort complex of Delhi. He later ordered the construction of the Badshahi Mosque in Lahore, which is today one of the largest mosques in the Indian subcontinent. The mosque he constructed in Srinagar is still the largest in Kashmir. Most of Aurangzeb's building activity revolved around mosques, but secular structures were not neglected. The Bibi Ka Maqbara in Aurangabad, the mausoleum of Rabia-ud-Daurani, was constructed by his eldest son Azam Shah upon Aurangzeb's decree. Its architecture displays clear inspiration from the Taj Mahal. Aurangzeb also provided and repaired urban structures like fortifications (for example a wall around Aurangabad, many of whose gates still survive), bridges, caravanserais, and gardens. Aurangzeb was more heavily involved in the repair and maintenance of previously existing structures. The most important of these were mosques, both Mughal and pre-Mughal, which he repaired more of than any of his predecessors. He patronised the "dargahs" of Sufi saints such as Bakhtiyar Kaki, and strived to maintain royal tombs. Textiles. The textile industry in the Mughal Empire emerged very firmly during the reign of the Mughal Emperor Aurangzeb and was particularly well noted by Francois Bernier, a French physician of the Mughal Emperor. Francois Bernier writes how "Karkanahs", or workshops for the artisans, particularly in textiles flourished by "employing hundreds of embroiderers, who were superintended by a master". He further writes how "Artisans manufacture of silk, fine brocade, and other fine muslins, of which are made turbans, robes of gold flowers, and tunics worn by females, so delicately fine as to wear out in one night, and cost even more if they were well embroidered with fine needlework". He also explains the different techniques employed to produce such complicated textiles such as "Himru" (whose name is Persian for "brocade"), "Paithani" (whose pattern is identical on both sides), "Mushru" (satin weave) and how "Kalamkari", in which fabrics are painted or block-printed, was a technique that originally came from Persia. Francois Bernier provided some of the first, impressive descriptions of the designs and the soft, delicate texture of Pashmina shawls also known as "Kani", which were very valued for their warmth and comfort among the Mughals, and how these textiles and shawls eventually began to find their way to France and England. Foreign relations. Aurangzeb sent diplomatic missions to Mecca in 1659 and 1662, with money and gifts for the Sharif. He also sent alms in 1666 and 1672 to be distributed in Mecca and Medina. Historian Naimur Rahman Farooqi writes that, "By 1694, Aurangzeb's ardour for the Sharifs of Mecca had begun to wane; their greed and rapacity had thoroughly disillusioned the Emperor ... 
Aurangzeb expressed his disgust at the unethical behavior of the Sharif who appropriated all the money sent to the Hijaz for his own use, thus depriving the needy and the poor." Relations with the Uzbek. Subhan Quli Khan, Balkh's Uzbek ruler, was the first to recognise him in 1658 and requested a general alliance; he had worked alongside the new Mughal Emperor since 1647, when Aurangzeb was the Subedar of Balkh. Relations with the Safavid dynasty. Aurangzeb received the embassy of Abbas II of Persia in 1660 and returned them with gifts. However, relations between the Mughal Empire and the Safavid dynasty were tense because the Persians attacked the Mughal army positioned near Kandahar. Aurangzeb prepared his armies in the Indus River Basin for a counteroffensive, but Abbas II's death in 1666 caused Aurangzeb to end all hostilities. Aurangzeb's rebellious son, Sultan Muhammad Akbar, sought refuge with Suleiman I of Persia, who had rescued him from the Imam of Musqat and later refused to assist him in any military adventures against Aurangzeb. Relations with the French. In 1667, the French East India Company ambassadors Le Gouz and Bebert presented Louis XIV of France's letter which urged the protection of French merchants from various rebels in the Deccan. In response to the letter, Aurangzeb issued a "firman" allowing the French to open a factory in Surat. Relations with the Sultanate of Maldives. In the 1660s, the Sultan of the Maldives, Ibrahim Iskandar I, requested help from Aurangzeb's representative, the Faujdar of Balasore. The Sultan wished to gain his support in possible future expulsions of Dutch and English trading ships, as he was concerned with how they might impact the economy of the Maldives. However, as Aurangzeb did not possess a powerful navy and had no interest in providing support to Ibrahim in a possible future war with the Dutch or English, the request came to nothing. Relations with the Ottoman Empire. Like his father, Aurangzeb was not willing to acknowledge the Ottoman claim to the caliphate. He often supported the Ottoman Empire's enemies, extending a cordial welcome to two rebel Governors of Basra, and granting them and their families a high status in the imperial service. Sultan Suleiman II's friendly postures were ignored by Aurangzeb. The Sultan urged Aurangzeb to wage holy war against Christians. Relations with the English and the Anglo-Mughal War. In 1686, the East India Company, which had unsuccessfully tried to obtain a "firman" that would grant them regular trading privileges throughout the Mughal Empire, initiated the Anglo-Mughal War. This war ended in disaster for the English after Aurangzeb in 1689 dispatched a large fleet from Janjira that blockaded Bombay. The ships, commanded by Sidi Yaqub, were manned by Indians and Mappilas. In 1690, realising the war was not going favourably for them, the Company sent envoys to Aurangzeb's camp to plead for a pardon. The company's envoys prostrated themselves before the emperor, agreed to pay a large indemnity, and promised to refrain from such actions in the future. In September 1695, English pirate Henry Every conducted one of the most profitable pirate raids in history with his capture of a Grand Mughal grab convoy near Surat. The Indian ships had been returning home from their annual pilgrimage to Mecca when the pirate struck, capturing the "Ganj-i-Sawai", reportedly the largest ship in the Muslim fleet, and its escorts in the process.
When news of the capture reached the mainland, a livid Aurangzeb nearly ordered an armed attack against the English-governed city of Bombay, though he finally agreed to compromise after the Company promised to pay financial reparations, estimated at £600,000 by the Mughal authorities. Meanwhile, Aurangzeb shut down four of the English East India Company's factories, imprisoned the workers and captains (who were nearly lynched by a rioting mob), and threatened to put an end to all English trading in India until Every was captured. The Lords Justices of England offered a bounty for Every's apprehension, leading to the first worldwide manhunt in recorded history. However, Every successfully eluded capture. In 1702, Aurangzeb sent Daud Khan Panni, the Mughal Empire's Subhedar of the Carnatic region, to besiege and blockade Fort St. George for more than three months. The governor of the fort, Thomas Pitt, was instructed by the East India Company to sue for peace. Relations with the Ethiopian Empire. Ethiopian Emperor Fasilides dispatched an embassy to India in 1664–65 to congratulate Aurangzeb upon his accession to the throne of the Mughal Empire. Relations with the Tibetans, Uyghurs, and Dzungars. After 1679, the Tibetans invaded Ladakh, which was in the Mughal sphere of influence. Aurangzeb intervened on Ladakh's behalf in 1683, but his troops retreated before Dzungar reinforcements arrived to bolster the Tibetan position. At the same time, however, a letter was sent from the governor of Kashmir claiming the Mughals had defeated the Dalai Lama and conquered all of Tibet, a cause for celebration in Aurangzeb's court. Aurangzeb received an embassy from Muhammad Amin Khan of Chagatai Moghulistan in 1690, seeking assistance in driving out "Qirkhiz infidels" (meaning the Buddhist Dzungars), who "had acquired dominance over the country". Relations with the Czardom of Russia. Russian Czar Peter the Great requested Aurangzeb to open Russo-Mughal trade relations in the late 17th century. In 1696 Aurangzeb received his envoy, Semyon Malenkiy, and allowed him to conduct free trade. After staying for six years in India, and visiting Surat, Burhanpur, Agra, Delhi and other cities, Russian merchants returned to Moscow with valuable Indian goods. Administrative reforms. Tribute. Aurangzeb received tribute from all over the Indian subcontinent, using this wealth to establish bases and fortifications in India, particularly in the Carnatic, Deccan, Bengal and Lahore. Revenue. Aurangzeb's exchequer raised a record £100 million in annual revenue through sources such as taxes, customs and land revenue from 24 provinces. He had an annual revenue of $450 million, more than ten times that of his contemporary Louis XIV of France. Coins. Aurangzeb felt that verses from the "Quran" should not be stamped on coins, as done in former times, because they were constantly touched by the hands and feet of people. His coins had the name of the mint city and the year of issue on one face, and a couplet on the other. Rebellions. Traditional and newly coherent social groups in northern and western India, such as the Marathas, Rajputs, Hindu Jats, Pashtuns, and Sikhs, gained military and governing ambitions during Mughal rule, which, through collaboration or opposition, gave them both recognition and military experience. Jat rebellion. In 1669, Hindu Jats began to organise a rebellion that is believed to have been caused by the re-imposition of "jizya" and destruction of Hindu temples in Mathura.
The Jats were led by Gokula, a rebel landholder from Tilpat. By 1670, 20,000 Jat rebels had been quelled and the Mughal Army took control of Tilpat; Gokula's personal fortune amounted to 93,000 gold coins and hundreds of thousands of silver coins. Gokula was caught and executed. But the Jats once again began their rebellion. Raja Ram Jat, in order to avenge his father Gokula's death, plundered Akbar's tomb of its gold, silver and fine carpets, opened Akbar's grave and dragged out and burned his bones in retaliation. Jats also shot off the tops of the minarets on the gateway to Akbar's Tomb and melted down two silver doors from the Taj Mahal. Aurangzeb appointed Mohammad Bidar Bakht as commander to crush the Jat rebellion. On 4 July 1688, Raja Ram Jat was captured and beheaded. His head was sent to Aurangzeb as proof. However, after Aurangzeb's death, Jats under Badan Singh later established their independent state of Bharatpur. Mughal–Maratha Wars. In 1657, while Aurangzeb attacked Golconda and Bijapur in the Deccan, the Hindu Maratha warrior, Shivaji, used guerrilla tactics to take control of three Adil Shahi forts formerly under his father's command. With these victories, Shivaji assumed de facto leadership of many independent Maratha clans. The Marathas harried the flanks of the warring Adil Shahis, gaining weapons, forts, and territory. Shivaji's small and ill-equipped army survived an all-out Adil Shahi attack, and Shivaji personally killed the Adil Shahi general, Afzal Khan. With this event, the Marathas transformed into a powerful military force, capturing more and more Adil Shahi territories. Shivaji went on to neutralise Mughal power in the region. In 1659, Aurangzeb sent his trusted general and maternal uncle Shaista Khan, the Wali in Golconda, to recover forts lost to the Maratha rebels. Shaista Khan drove into Maratha territory and took up residence in Pune. But in a daring raid on the governor's palace in Pune during a midnight wedding celebration, led by Shivaji himself, the Marathas killed Shaista Khan's son and Shivaji maimed Shaista Khan by cutting off three fingers of his hand. Shaista Khan, however, survived and was re-appointed the administrator of Bengal, going on to become a key commander in the war against the Ahoms. Aurangzeb next sent general Raja Jai Singh to vanquish the Marathas. Jai Singh besieged the fort of Purandar and fought off all attempts to relieve it. Foreseeing defeat, Shivaji agreed to terms. Jai Singh persuaded Shivaji to visit Aurangzeb at Agra, giving him a personal guarantee of safety. Their meeting at the Mughal court did not go well, however. Shivaji felt slighted at the way he was received, and insulted Aurangzeb by refusing imperial service. For this affront he was detained, but managed to effect a daring escape. Shivaji returned to the Deccan, and crowned himself "Chhatrapati" or the ruler of the Maratha Kingdom in 1674. Shivaji expanded Maratha control throughout the Deccan until his death in 1680. Shivaji was succeeded by his son, Sambhaji. Militarily and politically, Mughal efforts to control the Deccan continued to fail. On the other hand, Aurangzeb's third son Akbar left the Mughal court along with a few Muslim Mansabdar supporters and joined Muslim rebels in the Deccan. Aurangzeb in response moved his court to Aurangabad and took over command of the Deccan campaign. The rebels were defeated and Akbar fled south to seek refuge with Sambhaji, Shivaji's successor.
More battles ensued, and Akbar fled to Persia and never returned. In 1689, Aurangzeb's forces captured and executed Sambhaji. His successor Rajaram and, later, Rajaram's widow Tarabai and their Maratha forces fought individual battles against the forces of the Mughal Empire. Territory changed hands repeatedly during the years (1689–1707) of interminable warfare. As there was no central authority among the Marathas, Aurangzeb was forced to contest every inch of territory, at great cost in lives and money. Even as Aurangzeb drove west, deep into Maratha territory – notably conquering Satara – the Marathas expanded eastwards into Mughal lands – Malwa and Hyderabad. The Marathas also expanded further south into southern India, defeating the independent local rulers there and capturing Jinji in Tamil Nadu. Aurangzeb waged continuous war in the Deccan for more than two decades with no resolution. He thus lost about a fifth of his army fighting rebellions led by the Marathas in Deccan India. He travelled a long distance to the Deccan to conquer the Marathas and eventually died at the age of 88, still fighting them. Aurangzeb's shift from conventional warfare to anti-insurgency in the Deccan region shifted the paradigm of Mughal military thought. There were conflicts between Marathas and Mughals in Pune, Jinji, Malwa and Vadodara. The Mughal Empire's port city of Surat was sacked twice by the Marathas during the reign of Aurangzeb, leaving the valuable port in ruins. Matthew White estimates that about 2.5 million of Aurangzeb's soldiers were killed during the Mughal–Maratha Wars (100,000 annually during a quarter-century), while 2 million civilians in war-torn lands died due to drought, plague and famine. Ahom campaign. While Aurangzeb and his brother Shah Shuja had been fighting against each other, the Hindu rulers of Kuch Behar and Assam had taken advantage of the disturbed conditions in the Mughal Empire and invaded imperial dominions. For three years they were not attacked, but in 1660 Mir Jumla II, the viceroy of Bengal, was ordered to recover the lost territories. The Mughals set out in November 1661. Within weeks they occupied the capital of Kuch Behar, which they annexed. Leaving a detachment to garrison it, the Mughal army began to retake their territories in Assam. Mir Jumla II advanced on Garhgaon, the capital of the Ahom kingdom, and reached it on 17 March 1662. The ruler, Raja Sutamla, had fled before his approach. The Mughals captured 82 elephants, 300,000 rupees in cash, 1000 ships, and 173 stores of rice. On his way back to Dacca, in March 1663, Mir Jumla II died of natural causes. Skirmishes continued between the Mughals and Ahoms after the rise of Chakradhwaj Singha, who refused to pay further indemnity to the Mughals, and during the wars that continued the Mughals suffered great hardships. Munnawar Khan emerged as a leading figure and is known to have supplied food to vulnerable Mughal forces in the region near Mathurapur. Although the Mughals, under the command of Syed Firoz Khan, the Faujdar at Guwahati, were overrun by two Ahom armies in 1667, they continued to hold and maintain a presence in their eastern territories even after the battle of Saraighat in 1671. The battle of Saraighat was fought in 1671 between the Mughal empire (led by the Kachwaha king, Raja Ramsingh I), and the Ahom Kingdom (led by Lachit Borphukan) on the Brahmaputra river at Saraighat, now in Guwahati.
Although much weaker, the Ahom Army defeated the Mughal Army by brilliant use of the terrain, clever diplomatic negotiations to buy time, guerrilla tactics, psychological warfare, military intelligence and by exploiting the sole weakness of the Mughal forces—its navy. The battle of Saraighat was the last battle in the last major attempt by the Mughals to extend their empire into Assam. Though the Mughals managed to regain Guwahati briefly after a later Borphukan deserted it, the Ahoms wrested control in the battle of Itakhuli in 1682 and maintained it till the end of their rule. Satnami opposition. In May 1672, the Satnami sect, obeying the commandments of an "old toothless woman" (according to Mughal accounts), organised a massive revolt in the agricultural heartlands of the Mughal Empire. The Satnamis were known to have shaved off their heads and even eyebrows and had temples in many regions of Northern India. They began a large-scale rebellion 75 miles southwest of Delhi. The Satnamis believed they were invulnerable to Mughal bullets and believed they could multiply in any region they entered. The Satnamis initiated their march upon Delhi and overran small-scale Mughal infantry units. Aurangzeb responded by organising a Mughal army of 10,000 troops and artillery, and dispatched detachments of his own personal Mughal imperial guards to carry out several tasks. To boost Mughal morale, Aurangzeb wrote Islamic prayers, made amulets, and drew designs that would become emblems in the Mughal Army. This rebellion would have serious after-effects on the Punjab. Sikh opposition. The ninth Sikh Guru, Guru Tegh Bahadur, like his predecessors, was opposed to forced conversion of the local population as he considered it wrong. Approached by Kashmiri Pandits to help them retain their faith and avoid forced religious conversions, Guru Tegh Bahadur sent a message to the emperor that if he could convert Tegh Bahadur to Islam, every Hindu would become a Muslim. In response, Aurangzeb ordered the arrest of the Guru. He was then brought to Delhi and tortured in an attempt to convert him. On his refusal to convert, he was beheaded in 1675. In response, Guru Tegh Bahadur's son and successor, Guru Gobind Singh, further militarised his followers, starting with the establishment of the Khalsa in 1699, eight years before Aurangzeb's death. In 1705, Guru Gobind Singh sent a letter entitled "Zafarnamah", which accused Aurangzeb of cruelty and betraying Islam. The letter caused him much distress and remorse. Guru Gobind Singh's formation of the Khalsa in 1699 led to the establishment of the Sikh Confederacy and later the Sikh Empire. Pashtun opposition. The Pashtun revolt in 1672, under the leadership of the warrior poet Khushal Khan Khattak of Kabul, was triggered when soldiers under the orders of the Mughal Governor Amir Khan allegedly molested women of the Pashtun tribes in modern-day Kunar Province of Afghanistan. The Safi tribes retaliated against the soldiers. This attack provoked a reprisal, which triggered a general revolt of most of the tribes. Attempting to reassert his authority, Amir Khan led a large Mughal Army to the Khyber Pass, where the army was surrounded by tribesmen and routed, with only four men, including the Governor, managing to escape. Aurangzeb's incursions into the Pashtun areas were described by Khushal Khan Khattak as "Black is the Mughal's heart towards all of us Pathans". Aurangzeb employed a scorched earth policy, sending soldiers who massacred, looted and burnt many villages.
Aurangzeb also proceeded to use bribery to turn the Pashtun tribes against each other, with the aim of forestalling a unified Pashtun challenge to Mughal authority, and the impact of this was to leave a lasting legacy of mistrust among the tribes. After that the revolt spread, with the Mughals suffering a near total collapse of their authority in the Pashtun belt. The closure of the important Attock-Kabul trade route along the Grand Trunk road was particularly disastrous. By 1674, the situation had deteriorated to a point where Aurangzeb camped at Attock to personally take charge. Switching to diplomacy and bribery along with force of arms, the Mughals eventually split the rebels and partially suppressed the revolt, although they never managed to wield effective authority outside the main trade route. Death. By 1689, with the conquest of Golconda, Mughal victories in the south had expanded the Mughal Empire to 4 million square kilometres, with a population estimated to be over 158 million. But this supremacy was short-lived. Jos Gommans, Professor of Colonial and Global History at the University of Leiden, says that "... the highpoint of imperial centralisation under emperor Aurangzeb coincided with the start of the imperial downfall." Aurangzeb constructed a small marble mosque known as the Moti Masjid (Pearl Mosque) in the Red Fort complex in Delhi. However, his constant warfare, especially with the Marathas, drove his empire to the brink of bankruptcy just as much as the wasteful personal spending and opulence of his predecessors. The Indologist Stanley Wolpert, emeritus professor at UCLA, has also written about the cost of these campaigns. Even when ill and dying, Aurangzeb made sure that the populace knew he was still alive, for if they had thought otherwise then the turmoil of another war of succession was likely. He died at his military camp in Bhingar near Ahmednagar on 3 March 1707 at the age of 88, having outlived many of his children. He had only 300 rupees with him, which were later given to charity as per his instructions, and prior to his death he had requested that his funeral be kept simple rather than extravagant. His modest open-air grave in Khuldabad, Aurangabad, Maharashtra, expresses his deep devotion to his Islamic beliefs. It is sited in the courtyard of the shrine of the Sufi saint Shaikh Burhan-u'd-din Gharib, who was a disciple of Nizamuddin Auliya of Delhi. Brown writes that after his death, "a string of weak emperors, wars of succession, and coups by noblemen heralded the irrevocable weakening of Mughal power". She notes that the populist but "fairly old-fashioned" explanation for the decline is that there was a reaction to Aurangzeb's oppression. Although Aurangzeb died without appointing a successor, he instructed his three sons to divide the empire among themselves. His sons failed to reach a satisfactory agreement and fought against each other in a war of succession. Aurangzeb's immediate successor was his third son Azam Shah, who was defeated and killed in June 1707 at the battle of Jajau by the army of Bahadur Shah I, the second son of Aurangzeb. Both because of Aurangzeb's over-extension and because of Bahadur Shah's weak military and leadership qualities, the empire entered a period of terminal decline. Immediately after Bahadur Shah occupied the throne, the Maratha Empire – which Aurangzeb had held at bay, inflicting high human and monetary costs even on his own empire – consolidated and launched effective invasions of Mughal territory, seizing power from the weak emperor.
Within decades of Aurangzeb's death, the Mughal Emperor had little power beyond the walls of Delhi. Assessments and legacy. Aurangzeb's rule has been the subject of both praise and controversy. During his lifetime, victories in the south expanded the Mughal Empire to 4 million square kilometres, and he ruled over a population estimated to be over 158 million subjects. His critics argue that his ruthlessness and religious bigotry made him unsuitable to rule the mixed population of his empire. Some critics assert that the persecution of Shias, Sufis and non-Muslims to impose practices of orthodox Islamic state, such as imposition of sharia and "jizya" religious tax on non-Muslims, doubling of custom duties on Hindus while abolishing it for Muslims, executions of Muslims and non-Muslims alike, and destruction of temples eventually led to numerous rebellions. G. N. Moin Shakir and Sarma Festschrift argue that he often used political opposition as pretext for religious persecution, and that, as a result, groups of Jats, Marathas, Sikhs, Satnamis and Pashtuns rose against him. Multiple interpretations of Aurangzeb's life and reign over the years by critics have led to a very complicated legacy. Some argue that his policies abandoned his predecessors' legacy of pluralism and religious tolerance, citing his introduction of the "jizya" tax and other policies based on Islamic ethics; his demolition of Hindu temples; the executions of his elder brother Dara Shikoh, King Sambhaji of Maratha and Sikh Guru Tegh Bahadur and the prohibition and supervision of behaviour and activities that are forbidden in Islam such as gambling, fornication, and consumption of alcohol and narcotics. At the same time, some historians question the historical authenticity of the claims of his critics, arguing that his destruction of temples has been exaggerated, and noting that he built more temples than he destroyed, paid for their maintenance, employed significantly more Hindus in his imperial bureaucracy than his predecessors, and opposed bigotry against Hindus and Shia Muslims. In Pakistan, author Haroon Khalid writes that, "Aurangzeb is presented as a hero who fought and expanded the frontiers of the Islamic empire" and "is imagined to be a true believer who removed corrupt practices from religion and the court, and once again purified the empire." The academic Munis Faruqui also opines that the "Pakistani state and its allies in the religious and political establishments include him in the pantheon of premodern Muslim heroes, especially lauding him for his militarism, personal piety, and seeming willingness to accommodate Islamic morality within state goals." Muhammad Iqbal, considered the spiritual founder of Pakistan, compared him favorably to the prophet Abraham for his warfare against Akbar's "Din-i Ilahi" and idolatry, while Iqbal Singh Sevea, in his book on the political philosophy of the thinker, says that "Iqbal considered that the life and activities of Aurangzeb constituted the starting point of Muslim nationality in India." Maulana Shabbir Ahmad Usmani, in his funeral oration, hailed M.A. Jinnah, the founder of Pakistan, to be the greatest Muslim since Aurangzeb. Pakistani-American academic Akbar Ahmed described President Zia-ul-Haq, known for his Islamization drive, as "conceptually ... a spiritual descendent of Aurangzeb" because Zia had an orthodox, legalistic view of Islam. 
Beyond the individual appreciations, Aurangzeb is seminal to Pakistan's national self-consciousness, as historian Ayesha Jalal, while referring to the Pakistani textbooks controversy, mentions M. D. Zafar's "A Text Book of Pakistan Studies" where we can read that, under Aurangzeb, "Pakistan spirit gathered in strength", while his death "weakened the Pakistan spirit." Another historian from Pakistan, Mubarak Ali, also looking at the textbooks, and while noting that Akbar "is conveniently ignored and not mentioned in any school textbook from class one to matriculation", contrasts him with Aurangzeb, who "appears in different textbooks of Social Studies and Urdu language as an orthodox and pious Muslim copying the Holy Quran and sewing caps for his livelihood." This image of Aurangzeb is not limited to Pakistan's official historiography. As of 2015, about 177 towns and villages of India have been named after Aurangzeb. Historian Audrey Truschke points out that the Bharatiya Janata Party (BJP), Hindutva proponents and some others outside Hindutva ideology regard Aurangzeb as a Muslim zealot in India. Jawaharlal Nehru wrote that, due to his reversal of the cultural and religious syncretism of the previous Mughal emperors, Aurangzeb acted "more as a Moslem than an Indian ruler", while Mahatma Gandhi was of the view that there was a greater degree of freedom under Mughal rule than under British rule, asking: "in Aurangzeb's time a Shivaji could flourish. Has one hundred and fifty years of the British rule produced any Pratap and Shivaji?" Full title. The epithet Aurangzeb means 'Ornament of the Throne'. His chosen title Alamgir translates to Conqueror of the World. Aurangzeb's full imperial title was: "Al-Sultan al-Azam wal Khaqan al-Mukarram Hazrat Abul Muzaffar Muhy-ud-Din Muhammad Aurangzeb Bahadur Alamgir I", "Badshah Ghazi", "Shahanshah-e-Sultanat-ul-Hindiya Wal Mughaliya". Aurangzeb had also been attributed various other titles including "Caliph of The Merciful", "Monarch of Islam", and "Living Custodian of God". Literature. Aurangzeb has prominently featured in the following books
2427
Alexandrine
Alexandrine is a name used for several distinct types of verse line with related metrical structures, most of which are ultimately derived from the classical French alexandrine. The line's name derives from its use in the Medieval French "Roman d'Alexandre" of 1170, although it had already been used several decades earlier in "Le Pèlerinage de Charlemagne". The foundation of most alexandrines consists of two hemistichs (half-lines) of six syllables each, separated by a caesura (a metrical pause or word break, which may or may not be realized as a stronger syntactic break): o o o o o o | o o o o o o o=any syllable; |=caesura However, no tradition remains this simple. Each applies additional constraints (such as obligatory stress or nonstress on certain syllables) and options (such as a permitted or required additional syllable at the end of one or both hemistichs). Thus a line that is metrical in one tradition may be unmetrical in another. Where the alexandrine has been adopted, it has frequently served as the heroic verse form of that language or culture, English being a notable exception. Scope of the term. The term "alexandrine" may be used with greater or lesser rigour. Peureux suggests that only French syllabic verse with a 6+6 structure is, strictly speaking, an alexandrine. Preminger "et al". allow a broader scope: "Strictly speaking, the term 'alexandrine' is appropriate to French syllabic meters, and it may be applied to other metrical systems only where they too espouse syllabism as their principle, introduce phrasal accentuation, or rigorously observe the medial caesura, as in French." Common usage within the literatures of European languages is broader still, embracing lines syllabic, accentual-syllabic, and (inevitably) stationed ambivalently between the two; lines of 12, 13, or even 14 syllables; lines with obligatory, predominant, and optional caesurae. French. Although alexandrines occurred in French verse as early as the 12th century, they were slightly looser rhythmically, and vied with the "décasyllabe" and "octosyllabe" for cultural prominence and use in various genres. "The alexandrine came into its own in the middle of the sixteenth century with the poets of the Pléiade and was firmly established in the seventeenth century." It became the preferred line for the prestigious genres of epic and tragedy. The structure of the classical French alexandrine is o o o o o S | o o o o o S (e) S=stressed syllable; (e)=optional "mute e" Classical alexandrines are always rhymed, often in couplets alternating masculine rhymes and feminine rhymes, though other configurations (such as quatrains and sonnets) are also common. Victor Hugo began the process of loosening the strict two-hemistich structure. While retaining the medial caesura, he often reduced it to a mere word-break, creating a three-part line ("alexandrin ternaire") with this structure: o o o S | o o ¦ o S | o o o S (e) |=strong caesura; ¦=word break The Symbolists further weakened the classical structure, sometimes eliminating any or all of these caesurae. However, at no point did the newer line "replace" the older; rather, they were used concurrently, often in the same poem. This loosening process eventually led to "vers libéré" and finally to "vers libre". English. 
In English verse, "alexandrine" is typically used to mean "iambic hexameter": /="ictus", a strong syllabic position; ×="nonictus" ¦=often a mandatory or predominant caesura, but depends upon the author Whereas the French alexandrine is syllabic, the English is accentual-syllabic; and the central caesura (a defining feature of the French) is not always rigidly preserved in English. Though English alexandrines have occasionally provided the sole metrical line for a poem, for example in lyric poems by Henry Howard, Earl of Surrey and Sir Philip Sidney, and in two notable long poems, Michael Drayton's "Poly-Olbion" and Robert Browning's "Fifine at the Fair", they have more often featured alongside other lines. During the Middle Ages they typically occurred with heptameters (seven-beat lines), both exhibiting metrical looseness. Around the mid-16th century stricter alexandrines were popular as the first line of poulter's measure couplets, fourteeners (strict iambic heptameters) providing the second line. The strict English alexandrine may be exemplified by a passage from "Poly-Olbion", which features a rare caesural enjambment (symbolized codice_1) in the first line: <poem style="margin-left:2em"> Ye sacred Bards, that to ¦ your harps' melodious strings Sung Heroes' deeds (the monuments of Kings) And in your dreadful verse the prophecies, The agèd world's descents, and genealogies; (lines 31-34) </poem> The Faerie Queene by Edmund Spenser, with its stanzas of eight iambic pentameter lines followed by one alexandrine, exemplifies what came to be its chief role: as a somewhat infrequent variant line in an otherwise iambic pentameter context. Alexandrines provide occasional variation in the blank verse of William Shakespeare and his contemporaries (but rarely; they constitute only about 1% of Shakespeare's blank verse). John Dryden and his contemporaries and followers likewise occasionally employed them as the second (rarely the first) line of heroic couplets, or even more distinctively as the third line of a triplet. In his "Essay on Criticism", Alexander Pope denounced (and parodied) the excessive and unskillful use of this practice: <poem style="margin-left:2em"> Then at the last and only couplet fraught With some unmeaning thing they call a thought, A needless Alexandrine ends the song, That, like a wounded snake, drags its slow length along. (lines 354-357) </poem> Other languages. Spanish. The Spanish "verso alejandrino" is a line of 7+7 syllables, probably developed in imitation of the French alexandrine. Its structure is: o o o o o S o | o o o o o S o It was used beginning about 1200 for "mester de clerecía" (clerical verse), typically occurring in the "cuaderna vía", a stanza of four "alejandrinos" all with a single end-rhyme. The "alejandrino" was most prominent during the 13th and 14th centuries, after which time it was eclipsed by the metrically more flexible "arte mayor". Juan Ruiz's Book of Good Love is one of the best-known examples of "cuaderna vía", though other verse forms also appear in the work. Dutch. The mid-16th-century poet Jan van der Noot pioneered syllabic Dutch alexandrines on the French model, but within a few decades Dutch alexandrines had been transformed into strict iambic hexameters with a caesura after the third foot. From Holland the accentual-syllabic alexandrine spread to other continental literatures. German. 
Similarly, in early 17th-century Germany, Georg Rudolf Weckherlin advocated for an alexandrine with free rhythms, reflecting French practice; whereas Martin Opitz advocated for a strict accentual-syllabic iambic alexandrine in imitation of contemporary Dutch practice — and German poets followed Opitz. The alexandrine (strictly iambic with a consistent medial caesura) became the dominant long line of the German baroque. Polish. Unlike many similar lines, the Polish alexandrine developed not from French verse but from Latin, specifically, the 13-syllable goliardic line: Latin goliardic: o o o s S s s | o o o s S s Polish alexandrine: o o o o o S s | o o o s S s s=unstressed syllable Though looser instances of this (nominally) 13-syllable line were occasionally used in Polish literature, it was Mikołaj Rej and Jan Kochanowski who, in the 16th century, introduced the syllabically strict line as a vehicle for major works. Czech. The Czech alexandrine is a comparatively recent development, based on the French alexandrine and introduced by Karel Hynek Mácha in the 19th century. Its structure forms a halfway point between features usual in syllabic and in accentual-syllabic verse, being more highly constrained than most syllabic verse, and less so than most accentual-syllabic verse. Moreover, it equally encourages the very different rhythms of iambic hexameter and dactylic tetrameter to emerge by preserving the constants of both measures: iambic hexameter: s S s S s S | s S s S s S (s) dactylic tetrameter: S s s S s s | S s s S s s (s) Czech alexandrine: o o s S s o | o o s S s o (s) Hungarian. Hungarian metrical verse may be written either syllabically (the older and more traditional style, known as "national") or quantitatively. One of the national lines has a 6+6 structure: o o o o o o | o o o o o o Although deriving from native folk versification, it is possible that this line, and the related 6-syllable line, were influenced by Latin or Romance examples. When employed in 4-line or 8-line stanzas and riming in couplets, this is called the Hungarian alexandrine; it is the Hungarian heroic verse form. Beginning with the 16th-century verse of Bálint Balassi, this became the dominant Hungarian verseform. Modern references. In the comic book "Asterix and Cleopatra", the author Goscinny inserted a pun about alexandrines: when the Druid Panoramix ("Getafix" in the English translation) meets his Alexandrian (Egyptian) friend the latter exclaims "Je suis, mon cher ami, || très heureux de te voir" at which Panoramix observes "C'est un Alexandrin" ("That's an alexandrine!"/"He's an Alexandrian!"). The pun can also be heard in the theatrical adaptations. The English translation renders this as "My dear old Getafix || I hope I find you well", with the reply "An Alexandrine".
2428
Analog computer
An analog computer or analogue computer is a type of computer that uses the continuous variation aspect of physical phenomena such as electrical, mechanical, or hydraulic quantities ("analog signals") to model the problem being solved. In contrast, digital computers represent varying quantities symbolically and by discrete values of both time and amplitude (digital signals). Analog computers can have a very wide range of complexity. Slide rules and nomograms are the simplest, while naval gunfire control computers and large hybrid digital/analog computers were among the most complicated. Complex mechanisms for process control and protective relays used analog computation to perform control and protective functions. Analog computers were widely used in scientific and industrial applications even after the advent of digital computers, because at the time they were typically much faster, but they started to become obsolete as early as the 1950s and 1960s, although they remained in use in some specific applications, such as aircraft flight simulators, the flight computer in aircraft, and for teaching control systems in universities. Perhaps the most relatable example of an analog computer is the mechanical watch, in which the continuous and periodic rotation of interlinked gears drives the second, minute and hour hands. More complex applications, such as aircraft flight simulators and synthetic-aperture radar, remained the domain of analog computing (and hybrid computing) well into the 1980s, since digital computers were insufficient for the task. Timeline of analog computers. Precursors. This is a list of examples of early computation devices considered precursors of modern computers. Some of them may even have been dubbed 'computers' by the press, though they may fail to fit modern definitions. The Antikythera mechanism, a type of device used to determine the positions of heavenly bodies known as an orrery, was described as an early mechanical analog computer by British physicist, information scientist, and historian of science Derek J. de Solla Price. It was discovered in 1901, in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to the second or first century BC, during the Hellenistic period. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was first described by Ptolemy in the 2nd century AD. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BC and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, around AD 1000. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. 
The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Aviation is one of the few fields where slide rules are still in widespread use, particularly for solving time–distance problems in light aircraft. In 1831–1835, mathematician and engineer Giovanni Plana devised a perpetual-calendar machine, which, through a system of pulleys and cylinders, could predict the perpetual calendar for every year from AD 0 (that is, 1 BC) to AD 4000, keeping track of leap years and varying day length. The tide-predicting machine invented by Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876 James Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. A number of similar systems followed, notably those of the Spanish engineer Leonardo Torres Quevedo, who built several machines for solving real and complex roots of polynomials; and Michelson and Stratton, whose Harmonic Analyser performed Fourier analysis, but using an array of 80 springs rather than Kelvin integrators. This work led to the mathematical understanding of the Gibbs phenomenon of overshoot in Fourier representation near discontinuities. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. Modern era. The Dumaresq was a mechanical calculating device invented around 1902 by Lieutenant John Dumaresq of the Royal Navy. It was an analog computer that related vital variables of the fire control problem to the movement of one's own ship and that of a target ship. It was often used with other devices, such as a Vickers range clock to generate range and deflection data so the gun sights of the ship could be continuously set. A number of versions of the Dumaresq were produced of increasing complexity as development proceeded. By 1912, Arthur Pollen had developed an electrically driven mechanical analog computer for fire-control systems, based on the differential analyser. It was used by the Imperial Russian Navy in World War I. Starting in 1929, AC network analyzers were constructed to solve calculation problems related to electrical power systems that were too large to solve with numerical methods at the time. These were essentially scale models of the electrical properties of the full-size system. Since network analyzers could handle problems too large for analytic methods or hand computation, they were also used to solve problems in nuclear physics and in the design of structures. 
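The slide rule described above works by adding lengths on logarithmic scales. The following is a minimal illustrative sketch, not part of the original text, of that principle; the function name and example numbers are invented for illustration, and the rounding step stands in for the roughly three significant figures of reading precision a physical slide rule allows.
<syntaxhighlight lang="python">
import math

def slide_rule_multiply(a, b, digits=3):
    # sliding one log scale against the other adds log(a) + log(b)
    length = math.log10(a) + math.log10(b)
    # the product is read back off the scale (the antilog)
    result = 10 ** length
    # a physical slide rule is limited to about three significant figures
    return float(f"{result:.{digits}g}")

print(slide_rule_multiply(2.34, 5.67))   # ~13.3, versus the exact 13.2678
</syntaxhighlight>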
More than 50 large network analyzers were built by the end of the 1950s. World War II era gun directors, gun data computers, and bomb sights used mechanical analog computers. In 1942 Helmut Hölzer built a fully electronic analog computer at Peenemünde Army Research Center as an embedded control system ("mixing device") to calculate V-2 rocket trajectories from the accelerations and orientations (measured by gyroscopes) and to stabilize and guide the missile. Mechanical analog computers were very important in gun fire control in World War II, the Korean War and well past the Vietnam War; they were made in significant numbers. In the period 1930–1945 in the Netherlands, Johan van Veen developed an analogue computer to calculate and predict tidal currents when the geometry of the channels are changed. Around 1950, this idea was developed into the Deltar, a hydraulic analogy computer supporting the closure of estuaries in the southwest of the Netherlands (the Delta Works). The FERMIAC was an analog computer invented by physicist Enrico Fermi in 1947 to aid in his studies of neutron transport. Project Cyclone was an analog computer developed by Reeves in 1950 for the analysis and design of dynamic systems. Project Typhoon was an analog computer developed by RCA in 1952. It consisted of over 4,000 electron tubes and used 100 dials and 6,000 plug-in connectors to program. The MONIAC Computer was a hydraulic analogy of a national economy first unveiled in 1949. Computer Engineering Associates was spun out of Caltech in 1950 to provide commercial services using the "Direct Analogy Electric Analog Computer" ("the largest and most impressive general-purpose analyzer facility for the solution of field problems") developed there by Gilbert D. McCann, Charles H. Wilts, and Bart Locanthi. Educational analog computers illustrated the principles of analog calculation. The Heathkit EC-1, a $199 educational analog computer, was made by the Heath Company, US . It was programmed using patch cords that connected nine operational amplifiers and other components. General Electric also marketed an "educational" analog computer kit of a simple design in the early 1960s consisting of two transistor tone generators and three potentiometers wired such that the frequency of the oscillator was nulled when the potentiometer dials were positioned by hand to satisfy an equation. The relative resistance of the potentiometer was then equivalent to the formula of the equation being solved. Multiplication or division could be performed, depending on which dials were inputs and which was the output. Accuracy and resolution was limited and a simple slide rule was more accurate. However, the unit did demonstrate the basic principle. Analog computer designs were published in electronics magazines. One example is the PEAC (Practical Electronics analogue computer), published in "Practical Electronics" in the January 1968 edition. Another more modern hybrid computer design was published in "Everyday Practical Electronics" in 2002. An example described in the EPE hybrid computer was the flight of a VTOL aircraft such as the Harrier jump jet. The altitude and speed of the aircraft were calculated by the analog part of the computer and sent to a PC via a digital microprocessor and displayed on the PC screen. In industrial process control, analog loop controllers were used to automatically regulate temperature, flow, pressure, or other process conditions. 
The technology of these controllers ranged from purely mechanical integrators, through vacuum-tube and solid-state devices, to emulation of analog controllers by microprocessors. Electronic analog computers. The similarity between linear mechanical components, such as springs and dashpots (viscous-fluid dampers), and electrical components, such as capacitors, inductors, and resistors, is striking in terms of mathematics. They can be modeled using equations of the same form. However, the difference between these systems is what makes analog computing useful. Complex systems often are not amenable to pen-and-paper analysis, and require some form of testing or simulation. Complex mechanical systems, such as suspensions for racing cars, are expensive to fabricate and hard to modify. And taking precise mechanical measurements during high-speed tests adds further difficulty. By contrast, it is very inexpensive to build an electrical equivalent of a complex mechanical system, to simulate its behavior. Engineers arrange a few operational amplifiers (op amps) and some passive linear components to form a circuit that follows the same equations as the mechanical system being simulated. All measurements can be taken directly with an oscilloscope. In the circuit, the (simulated) stiffness of the spring, for instance, can be changed by adjusting the parameters of an integrator. The electrical system is an analogy to the physical system, hence the name, but it is much less expensive than a mechanical prototype, much easier to modify, and generally safer. The electronic circuit can also be made to run faster or slower than the physical system being simulated. Experienced users of electronic analog computers said that they offered a comparatively intimate control and understanding of the problem, relative to digital simulations. Electronic analog computers are especially well-suited to representing situations described by differential equations. Historically, they were often used when a system of differential equations proved very difficult to solve by traditional means. As a simple example, the dynamics of a spring-mass system can be described by the equation $m\ddot{y} + d\dot{y} + ky = mg$, with $y$ as the vertical position of a mass $m$, $d$ the damping coefficient, $k$ the spring constant and $g$ the gravity of Earth. For analog computing, the equation is programmed as $\ddot{y} = -\tfrac{d}{m}\dot{y} - \tfrac{k}{m}y + g$. The equivalent analog circuit consists of two integrators for the state variables $\dot{y}$ (speed) and $y$ (position), one inverter, and three potentiometers. Electronic analog computers have drawbacks: the value of the circuit's supply voltage limits the range over which the variables may vary (since the value of a variable is represented by a voltage on a particular wire). Therefore, each problem must be scaled so its parameters and dimensions can be represented using voltages that the circuit can supply (e.g., the expected magnitudes of the velocity and the position of a spring pendulum). Improperly scaled variables can have their values "clamped" by the limits of the supply voltage. Or if scaled too small, they can suffer from higher noise levels. Either problem can cause the circuit to produce an incorrect simulation of the physical system. 
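The spring-mass "analog program" above can be mirrored numerically. The following is a minimal sketch, not from the original text and with assumed parameter values, in which the two chained integrators are emulated with a small Euler time step; on a real analog computer the integration is continuous and the coefficients d/m, k/m and g would be dialed in on potentiometers.
<syntaxhighlight lang="python">
# assumed values: mass, damping coefficient, spring constant, gravity
m, d, k, g = 1.0, 0.5, 20.0, 9.81
dt = 1e-3                            # integration time step in seconds

y, y_dot = 0.0, 0.0                  # integrator outputs: position and speed
for _ in range(int(5.0 / dt)):       # simulate 5 seconds
    # summing junction forms the highest derivative from the fed-back states
    y_ddot = -(d / m) * y_dot - (k / m) * y + g
    y_dot += y_ddot * dt             # first integrator: speed
    y += y_dot * dt                  # second integrator: position

print(f"position after 5 s: {y:.4f} (static deflection m*g/k = {m * g / k:.4f})")
</syntaxhighlight>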
(Modern digital simulations are much more robust to widely varying values of their variables, but are still not entirely immune to these concerns: floating-point digital calculations support a huge dynamic range, but can suffer from imprecision if tiny differences of huge values lead to numerical instability.) The precision of the analog computer readout was limited chiefly by the precision of the readout equipment used, generally three or four significant figures. (Modern digital simulations are much better in this area. Digital arbitrary-precision arithmetic can provide any desired degree of precision.) However, in most cases the precision of an analog computer is absolutely sufficient given the uncertainty of the model characteristics and its technical parameters. Many small computers dedicated to specific computations are still part of industrial regulation equipment, but from the 1950s to the 1970s, general-purpose analog computers were the only systems fast enough for real time simulation of dynamic systems, especially in the aircraft, military and aerospace field. In the 1960s, the major manufacturer was Electronic Associates of Princeton, New Jersey, with its 231R Analog Computer (vacuum tubes, 20 integrators) and subsequently its EAI 8800 Analog Computer (solid state operational amplifiers, 64 integrators). Its challenger was Applied Dynamics of Ann Arbor, Michigan. Although the basic technology for analog computers is usually operational amplifiers (also called "continuous current amplifiers" because they have no low frequency limitation), in the 1960s an attempt was made in the French ANALAC computer to use an alternative technology: medium frequency carrier and non-dissipative reversible circuits. In the 1970s, every large company and administration concerned with problems in dynamics had an analog computing center, such as: Analog–digital hybrids. Analog computing devices are fast; digital computing devices are more versatile and accurate. The idea behind an analog-digital hybrid is to combine the two processes for the best efficiency. An example of such a hybrid elementary device is the hybrid multiplier, where one input is an analog signal, the other input is a digital signal and the output is analog. It acts as an analog potentiometer, upgradable digitally. This kind of hybrid technique is mainly used for fast dedicated real time computation when computing time is very critical, such as signal processing for radars and generally for controllers in embedded systems. In the early 1970s, analog computer manufacturers tried to tie their analog computers together with digital computers to get the advantages of the two techniques. In such systems, the digital computer controlled the analog computer, providing initial set-up, initiating multiple analog runs, and automatically feeding and collecting data. The digital computer may also participate in the calculation itself using analog-to-digital and digital-to-analog converters. The largest manufacturer of hybrid computers was Electronics Associates. Their hybrid computer model 8900 was made of a digital computer and one or more analog consoles. These systems were mainly dedicated to large projects such as the Apollo program and Space Shuttle at NASA, or Ariane in Europe, especially during the integration step, in which everything is simulated at first and real components progressively replace their simulated parts. 
Only one company was known as offering general commercial computing services on its hybrid computers, CISI of France, in the 1970s. The best reference in this field is the 100,000 simulation runs for each certification of the automatic landing systems of Airbus and Concorde aircraft. After 1980, purely digital computers progressed more and more rapidly and were fast enough to compete with analog computers. One key to the speed of analog computers was their fully parallel computation, but this was also a limitation. The more equations required for a problem, the more analog components were needed, even when the problem wasn't time critical. "Programming" a problem meant interconnecting the analog operators; even with a removable wiring panel this was not very versatile. Today there are no more big hybrid computers, but only hybrid components. Implementations. Mechanical analog computers. While a wide variety of mechanisms have been developed throughout history, some stand out because of their theoretical importance, or because they were manufactured in significant quantities. Most practical mechanical analog computers of any significant complexity used rotating shafts to carry variables from one mechanism to another. Cables and pulleys were used in a Fourier synthesizer, a tide-predicting machine, which summed the individual harmonic components. Another category, not nearly as well known, used rotating shafts only for input and output, with precision racks and pinions. The racks were connected to linkages that performed the computation. At least one U.S. Naval sonar fire control computer of the later 1950s, made by Librascope, was of this type, as was the principal computer in the Mk. 56 Gun Fire Control System. Online, there is a remarkably clear illustrated reference (OP 1140) that describes the fire control computer mechanisms. For adding and subtracting, precision miter-gear differentials were in common use in some computers; the Ford Instrument Mark I Fire Control Computer contained about 160 of them. Integration with respect to another variable was done by a rotating disc driven by one variable. Output came from a pick-off device (such as a wheel) positioned at a radius on the disc proportional to the second variable. (A carrier with a pair of steel balls supported by small rollers worked especially well. A roller, its axis parallel to the disc's surface, provided the output. It was held against the pair of balls by a spring.) Arbitrary functions of one variable were provided by cams, with gearing to convert follower movement to shaft rotation. Functions of two variables were provided by three-dimensional cams. In one good design, one of the variables rotated the cam. A hemispherical follower moved its carrier on a pivot axis parallel to that of the cam's rotating axis. Pivoting motion was the output. The second variable moved the follower along the axis of the cam. One practical application was ballistics in gunnery. Coordinate conversion from polar to rectangular was done by a mechanical resolver (called a "component solver" in US Navy fire control computers). Two discs on a common axis positioned a sliding block with pin (stubby shaft) on it. One disc was a face cam, and a follower on the block in the face cam's groove set the radius. The other disc, closer to the pin, contained a straight slot in which the block moved. 
The input angle rotated the latter disc (the face cam disc, for an unchanging radius, rotated with the other (angle) disc; a differential and a few gears did this correction). Referring to the mechanism's frame, the location of the pin corresponded to the tip of the vector represented by the angle and magnitude inputs. Mounted on that pin was a square block. Rectilinear-coordinate outputs (both sine and cosine, typically) came from two slotted plates, each slot fitting on the block just mentioned. The plates moved in straight lines, the movement of one plate at right angles to that of the other. The slots were at right angles to the direction of movement. Each plate, by itself, was like a Scotch yoke, known to steam engine enthusiasts. During World War II, a similar mechanism converted rectilinear to polar coordinates, but it was not particularly successful and was eliminated in a significant redesign (USN, Mk. 1 to Mk. 1A). Multiplication was done by mechanisms based on the geometry of similar right triangles. Using the trigonometric terms for a right triangle, specifically opposite, adjacent, and hypotenuse, the adjacent side was fixed by construction. One variable changed the magnitude of the opposite side. In many cases, this variable changed sign; the hypotenuse could coincide with the adjacent side (a zero input), or move beyond the adjacent side, representing a sign change. Typically, a pinion-operated rack moving parallel to the (trig.-defined) opposite side would position a slide with a slot coincident with the hypotenuse. A pivot on the rack let the slide's angle change freely. At the other end of the slide (the angle, in trig. terms), a block on a pin fixed to the frame defined the vertex between the hypotenuse and the adjacent side. At any distance along the adjacent side, a line perpendicular to it intersects the hypotenuse at a particular point. The distance between that point and the adjacent side is some fraction that is the product of "1" the distance from the vertex, and "2" the magnitude of the opposite side. The second input variable in this type of multiplier positions a slotted plate perpendicular to the adjacent side. That slot contains a block, and that block's position in its slot is determined by another block right next to it. The latter slides along the hypotenuse, so the two blocks are positioned at a distance from the (trig.) adjacent side by an amount proportional to the product. To provide the product as an output, a third element, another slotted plate, also moves parallel to the (trig.) opposite side of the theoretical triangle. As usual, the slot is perpendicular to the direction of movement. A block in its slot, pivoted to the hypotenuse block positions it. A special type of integrator, used at a point where only moderate accuracy was needed, was based on a steel ball, instead of a disc. It had two inputs, one to rotate the ball, and the other to define the angle of the ball's rotating axis. That axis was always in a plane that contained the axes of two movement pick-off rollers, quite similar to the mechanism of a rolling-ball computer mouse (in that mechanism, the pick-off rollers were roughly the same diameter as the ball). The pick-off roller axes were at right angles. A pair of rollers "above" and "below" the pick-off plane were mounted in rotating holders that were geared together. That gearing was driven by the angle input, and established the rotating axis of the ball. The other input rotated the "bottom" roller to make the ball rotate. 
Essentially, the whole mechanism, called a component integrator, was a variable-speed drive with one motion input and two outputs, as well as an angle input. The angle input varied the ratio (and direction) of coupling between the "motion" input and the outputs according to the sine and cosine of the input angle. Although they did not accomplish any computation, electromechanical position servos were essential in mechanical analog computers of the "rotating-shaft" type for providing operating torque to the inputs of subsequent computing mechanisms, as well as driving output data-transmission devices such as large torque-transmitter synchros in naval computers. Other readout mechanisms, not directly part of the computation, included internal odometer-like counters with interpolating drum dials for indicating internal variables, and mechanical multi-turn limit stops. Considering that accurately controlled rotational speed in analog fire-control computers was a basic element of their accuracy, there was a motor with its average speed controlled by a balance wheel, hairspring, jeweled-bearing differential, a twin-lobe cam, and spring-loaded contacts (ship's AC power frequency was not necessarily accurate, nor dependable enough, when these computers were designed). Electronic analog computers. Electronic analog computers typically have front panels with numerous jacks (single-contact sockets) that permit patch cords (flexible wires with plugs at both ends) to create the interconnections that define the problem setup. In addition, there are precision high-resolution potentiometers (variable resistors) for setting up (and, when needed, varying) scale factors. In addition, there is usually a zero-center analog pointer-type meter for modest-accuracy voltage measurement. Stable, accurate voltage sources provide known magnitudes. Typical electronic analog computers contain anywhere from a few to a hundred or more operational amplifiers ("op amps"), named because they perform mathematical operations. Op amps are a particular type of feedback amplifier with very high gain and stable input (low and stable offset). They are always used with precision feedback components that, in operation, all but cancel out the currents arriving from input components. The majority of op amps in a representative setup are summing amplifiers, which add and subtract analog voltages, providing the result at their output jacks. As well, op amps with capacitor feedback are usually included in a setup; they integrate the sum of their inputs with respect to time. Integrating with respect to another variable is the nearly exclusive province of mechanical analog integrators; it is almost never done in electronic analog computers. However, given that a problem solution does not change with time, time can serve as one of the variables. Other computing elements include analog multipliers, nonlinear function generators, and analog comparators. Electrical elements such as inductors and capacitors used in electrical analog computers had to be carefully manufactured to reduce non-ideal effects. For example, in the construction of AC power network analyzers, one motive for using higher frequencies for the calculator (instead of the actual power frequency) was that higher-quality inductors could be more easily made. 
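The component integrator just described resolves a motion input through the sine and cosine of an angle input and accumulates each component. A rough numerical sketch of that computation follows; it is not from the original text, and the scenario (a ship's speed and course track) and all values are assumed, but it shows the dead-reckoning style of calculation such mechanisms performed in fire-control systems.
<syntaxhighlight lang="python">
import math

north, east = 0.0, 0.0
dt = 1.0                                             # seconds per step
# assumed track: one minute at 10 m/s on course 045, then one minute on course 090
track = [(10.0, 45.0)] * 60 + [(10.0, 90.0)] * 60    # (speed in m/s, course in degrees)

for speed, course in track:
    step = speed * dt                                # "motion" input for this interval
    north += step * math.cos(math.radians(course))   # cosine output roller
    east += step * math.sin(math.radians(course))    # sine output roller

print(f"north: {north:.1f} m, east: {east:.1f} m")
</syntaxhighlight>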
Many general-purpose analog computers avoided the use of inductors entirely, re-casting the problem in a form that could be solved using only resistive and capacitive elements, since high-quality capacitors are relatively easy to make. The use of electrical properties in analog computers means that calculations are normally performed in real time (or faster), at a speed determined mostly by the frequency response of the operational amplifiers and other computing elements. In the history of electronic analog computers, there were some special high-speed types. Nonlinear functions and calculations can be constructed to a limited precision (three or four digits) by designing function generators—special circuits of various combinations of resistors and diodes to provide the nonlinearity. Typically, as the input voltage increases, progressively more diodes conduct. When compensated for temperature, the forward voltage drop of a transistor's base-emitter junction can provide a usably accurate logarithmic or exponential function. Op amps scale the output voltage so that it is usable with the rest of the computer. Any physical process that models some computation can be interpreted as an analog computer. Some examples, invented for the purpose of illustrating the concept of analog computation, include using a bundle of spaghetti as a model of sorting numbers; a board, a set of nails, and a rubber band as a model of finding the convex hull of a set of points; and strings tied together as a model of finding the shortest path in a network. These are all described in Dewdney (1984). Components. Analog computers often have a complicated framework, but they have, at their core, a set of key components that perform the calculations. The operator manipulates these through the computer's framework. Key hydraulic components might include pipes, valves and containers. Key mechanical components might include rotating shafts for carrying data within the computer, miter gear differentials, disc/ball/roller integrators, cams (2-D and 3-D), mechanical resolvers and multipliers, and torque servos. Key electrical/electronic components might include: The core mathematical operations used in an electric analog computer are: In some analog computer designs, multiplication is much preferred to division. Division is carried out with a multiplier in the feedback path of an Operational Amplifier. Differentiation with respect to time is not frequently used, and in practice is avoided by redefining the problem when possible. It corresponds in the frequency domain to a high-pass filter, which means that high-frequency noise is amplified; differentiation also risks instability. Limitations. In general, analog computers are limited by non-ideal effects. An analog signal is composed of four basic components: DC and AC magnitudes, frequency, and phase. The real limits of range on these characteristics limit analog computers. Some of these limits include the operational amplifier offset, finite gain, and frequency response, noise floor, non-linearities, temperature coefficient, and parasitic effects within semiconductor devices. For commercially available electronic components, ranges of these aspects of input and output signals are always figures of merit. Decline. In the 1950s to 1970s, digital computers based on first vacuum tubes, transistors, integrated circuits and then micro-processors became more economical and precise. This led digital computers to largely replace analog computers. 
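The diode function generator mentioned above amounts to a piecewise-linear approximation of the target nonlinearity, with each additional conducting diode contributing one more straight segment. The following is an illustrative sketch, not from the original text, in which numpy.interp stands in for the segment network and the breakpoints are assumed values.
<syntaxhighlight lang="python">
import numpy as np

x_break = np.array([1.0, 2.0, 3.0, 5.0, 7.0, 10.0])   # assumed breakpoint "voltages"
y_break = np.log10(x_break)                            # target function sampled at breakpoints

x = np.linspace(1.0, 10.0, 901)
approx = np.interp(x, x_break, y_break)                # piecewise-linear approximation
print(f"worst-case error: {np.max(np.abs(approx - np.log10(x))):.4f}")
# adding breakpoints (i.e. more diode segments) reduces the error further
</syntaxhighlight>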
Even so, some research in analog computation is still being done. A few universities still use analog computers to teach control system theory. The American company Comdyna manufactured small analog computers. At Indiana University Bloomington, Jonathan Mills has developed the Extended Analog Computer based on sampling voltages in a foam sheet. At the Harvard Robotics Laboratory, analog computation is a research topic. Lyric Semiconductor's error correction circuits use analog probabilistic signals. Slide rules are still popular among aircraft personnel. Resurgence. With the development of very-large-scale integration (VLSI) technology, Yannis Tsividis' group at Columbia University has been revisiting analog/hybrid computers design in standard CMOS process. Two VLSI chips have been developed, an 80th-order analog computer (250 nm) by Glenn Cowan in 2005 and a 4th-order hybrid computer (65 nm) developed by Ning Guo in 2015, both targeting at energy-efficient ODE/PDE applications. Glenn's chip contains 16 macros, in which there are 25 analog computing blocks, namely integrators, multipliers, fanouts, few nonlinear blocks. Ning's chip contains one macro block, in which there are 26 computing blocks including integrators, multipliers, fanouts, ADCs, SRAMs and DACs. Arbitrary nonlinear function generation is made possible by the ADC+SRAM+DAC chain, where the SRAM block stores the nonlinear function data. The experiments from the related publications revealed that VLSI analog/hybrid computers demonstrated about 1–2 orders magnitude of advantage in both solution time and energy while achieving accuracy within 5%, which points to the promise of using analog/hybrid computing techniques in the area of energy-efficient approximate computing. In 2016, a team of researchers developed a compiler to solve differential equations using analog circuits. Analog computers are also used in neuromorphic computing, and in 2021 a group of researchers have shown that a specific type of artificial neural network called a spiking neural network was able to work with analog neuromorphic computers. Practical examples. These are examples of analog computers that have been constructed or practically used: Analog (audio) synthesizers can also be viewed as a form of analog computer, and their technology was originally based in part on electronic analog computer technology. The ARP 2600's Ring Modulator was actually a moderate-accuracy analog multiplier. The Simulation Council (or Simulations Council) was an association of analog computer users in US. It is now known as The Society for Modeling and Simulation International. The Simulation Council newsletters from 1952 to 1963 are available online and show the concerns and technologies at the time, and the common use of analog computers for missilry.
2429
Audio
Audio most commonly refers to sound, as it is transmitted in signal form. It may also refer to:
2431
Minute and second of arc
A minute of arc, arcminute (arcmin), arc minute, or minute arc, denoted by the symbol ′, is a unit of angular measurement equal to 1/60 of one degree. Since one degree is 1/360 of a turn (or complete rotation), one arcminute is 1/21,600 of a turn. The nautical mile (nmi) was originally defined as the arc length of a minute of latitude on a spherical Earth, so the actual Earth circumference is very near 21,600 nautical miles. A minute of arc is π/10,800 of a radian. A second of arc, arcsecond (arcsec), or arc second, denoted by the symbol ″, is 1/60 of an arcminute, 1/3,600 of a degree, 1/1,296,000 of a turn, and π/648,000 (about 1/206,265) of a radian. These units originated in Babylonian astronomy as sexagesimal subdivisions of the degree; they are used in fields that involve very small angles, such as astronomy, optometry, ophthalmology, optics, navigation, land surveying, and marksmanship. To express even smaller angles, standard SI prefixes can be employed; the milliarcsecond (mas) and microarcsecond (μas), for instance, are commonly used in astronomy. For a three-dimensional area such as on a sphere, "square arcminutes" or "seconds" may be used. Symbols and abbreviations. The prime symbol ′ (U+2032) designates the arcminute, though a single quote ' (U+0027) is commonly used where only ASCII characters are permitted. One arcminute is thus written as 1′. It is also abbreviated as arcmin or amin. Similarly, double prime ″ (U+2033) designates the arcsecond, though a double quote " (U+0022) is commonly used where only ASCII characters are permitted. One arcsecond is thus written as 1″. It is also abbreviated as arcsec or asec. In celestial navigation, seconds of arc are rarely used in calculations, the preference usually being for degrees, minutes, and decimals of a minute, for example, written as 42° 25.32′ or 42° 25.322′. This notation has been carried over into marine GPS receivers, which normally display latitude and longitude in the latter format by default. Common examples. The average apparent diameter of the full Moon is about 31 arcminutes, or 0.52°. One arcminute is the approximate resolution of the human eye. One arcsecond is the approximate angle subtended by a U.S. dime coin (18 mm) at a distance of about 4 kilometres (2.5 mi). An arcsecond is also the angle subtended by One milliarcsecond is about the size of a half dollar, seen from a distance equal to that between the Washington Monument and the Eiffel Tower. One microarcsecond is about the size of a period at the end of a sentence in the Apollo mission manuals left on the Moon as seen from Earth. One nanoarcsecond is about the size of a penny on Neptune's moon Triton as observed from Earth. Also notable examples of size in arcseconds are: History. The concepts of degrees, minutes, and seconds—as they relate to the measure of both angles and time—derive from Babylonian astronomy and time-keeping. Influenced by the Sumerians, the ancient Babylonians divided the Sun's perceived motion across the sky over the course of one full day into 360 degrees. Each degree was subdivided into 60 minutes and each minute into 60 seconds. Thus, one Babylonian degree was equal to four minutes in modern terminology, one Babylonian minute to four modern seconds, and one Babylonian second to 1/15 (approximately 0.067) of a modern second. Uses. Astronomy. Since antiquity, the arcminute and arcsecond have been used in astronomy: in the ecliptic coordinate system as latitude (β) and longitude (λ); in the horizon system as altitude (Alt) and azimuth (Az); and in the equatorial coordinate system as declination (δ). All are measured in degrees, arcminutes, and arcseconds. 
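The subdivisions above can be made concrete with a few conversions. The following is a small helper sketch, not part of the original text; the function names are invented for illustration.
<syntaxhighlight lang="python">
import math

def arcmin_to_deg(m): return m / 60.0
def arcsec_to_deg(s): return s / 3600.0
def deg_to_rad(d):    return d * math.pi / 180.0

print(deg_to_rad(arcmin_to_deg(1)))       # one arcminute ~ 2.909e-4 rad (pi/10800)
print(deg_to_rad(arcsec_to_deg(1)))       # one arcsecond ~ 4.848e-6 rad (pi/648000)
print(1 / deg_to_rad(arcsec_to_deg(1)))   # ~206265 arcseconds per radian
</syntaxhighlight>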
The principal exception is right ascension (RA) in equatorial coordinates, which is measured in time units of hours, minutes, and seconds. Contrary to what one might assume, minutes and seconds of arc do not directly relate to minutes and seconds of time, in either the rotational frame of the Earth around its own axis (day), or the Earth's rotational frame around the Sun (year). The Earth's rotational rate around its own axis is 15 minutes of arc per minute of time (360 degrees / 24 hours in day); the Earth's rotational rate around the Sun (not entirely constant) is roughly 24 minutes of time per minute of arc (from 24 hours in day), which tracks the annual progression of the Zodiac. Both of these factor in what astronomical objects you can see from surface telescopes (time of year) and when you can best see them (time of day), but neither is in unit correspondence. For simplicity, the explanations given assume a degree/day in the Earth's annual rotation around the Sun, which is off by roughly 1%. The same ratios hold for seconds, due to the consistent factor of 60 on both sides. The arcsecond is also often used to describe small astronomical angles such as the angular diameters of planets (e.g. the angular diameter of Venus which varies between 10″ and 60″); the proper motion of stars; the separation of components of binary star systems; and parallax, the small change of position of a star or solar system body as the Earth revolves about the Sun. These small angles may also be written in milliarcseconds (mas), or thousandths of an arcsecond. The unit of distance called the parsec, abbreviated from the parallax angle of one arc second, was developed for such parallax measurements. The distance from the Sun to a celestial object, in parsecs, is the reciprocal of the angle, measured in arcseconds, of the object's apparent movement caused by parallax. The European Space Agency's astrometric satellite Gaia, launched in 2013, can approximate star positions to 7 microarcseconds (µas). Apart from the Sun, the star with the largest angular diameter from Earth is R Doradus, a red giant with a diameter of 0.05″. Because of the effects of atmospheric blurring, ground-based telescopes will smear the image of a star to an angular diameter of about 0.5″; in poor conditions this increases to 1.5″ or even more. The dwarf planet Pluto has proven difficult to resolve because its angular diameter is about 0.1″. Space telescopes are not affected by the Earth's atmosphere but are diffraction limited. For example, the Hubble Space Telescope can reach an angular size of stars down to about 0.1″. Techniques exist for improving seeing on the ground. Adaptive optics, for example, can produce images around 0.05″ on a 10 m class telescope. Cartography. Minutes (′) and seconds (″) of arc are also used in cartography and navigation. At sea level one minute of arc along the equator equals exactly one geographical mile along the Earth's equator or approximately 1,855 metres (about 1.15 mi). A second of arc, one sixtieth of this amount, is roughly 31 metres (about 100 feet). The exact distance varies along meridian arcs or any other great circle arcs because the figure of the Earth is slightly oblate (bulges a third of a percent at the equator). Positions are traditionally given using degrees, minutes, and seconds of arc for latitude, the arc north or south of the equator, and for longitude, the arc east or west of the Prime Meridian. Any position on or above the Earth's reference ellipsoid can be precisely given with this method. 
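The parallax relation mentioned above (distance in parsecs is the reciprocal of the parallax angle in arcseconds) is simple enough to show directly. This is a short sketch, not from the original text; the example parallax values are illustrative (Proxima Centauri's parallax is roughly 0.77″).
<syntaxhighlight lang="python">
def distance_parsecs(parallax_arcsec):
    # parsec definition: 1 pc corresponds to a parallax of exactly one arcsecond
    return 1.0 / parallax_arcsec

print(distance_parsecs(0.7687))   # ~1.30 pc for a Proxima Centauri-like parallax
print(distance_parsecs(0.010))    # a 10 mas parallax corresponds to 100 pc
</syntaxhighlight>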
However, when it is inconvenient to use base-60 for minutes and seconds, positions are frequently expressed as decimal fractional degrees to an equal amount of precision. Degrees given to three decimal places (1/1,000 of a degree) have about 1/4 the precision of degrees-minutes-seconds (1/3,600 of a degree) and specify locations within about 110 metres (360 feet). For navigational purposes positions are given in degrees and decimal minutes, for instance the Needles lighthouse is at 50° 39.734′ N, 001° 35.500′ W. Property cadastral surveying. Related to cartography, property boundary surveying using the metes and bounds system and cadastral surveying relies on fractions of a degree to describe property lines' angles in reference to cardinal directions. A boundary "mete" is described with a beginning reference point, the cardinal direction North or South followed by an angle less than 90 degrees and a second cardinal direction, and a linear distance. The boundary runs the specified linear distance from the beginning point, the direction of the distance being determined by rotating the first cardinal direction the specified angle toward the second cardinal direction. For example, "North 65° 39′ 18″ West 85.69 feet" would describe a line running from the starting point 85.69 feet in a direction 65° 39′ 18″ (or 65.655°) away from north toward the west. Firearms. The arcminute is commonly found in the firearms industry and literature, particularly concerning the precision of rifles, though the industry refers to it as minute of angle (MOA). It is especially popular as a unit of measurement with shooters familiar with the imperial measurement system because 1 MOA subtends a circle with a diameter of 1.047 inches (which is often rounded to just 1 inch) at 100 yards (2.66 cm at 91.44 m, or 2.908 cm at 100 m), a traditional distance on American target ranges. The subtension is linear with the distance, for example, at 500 yards, 1 MOA subtends 5.235 inches, and at 1000 yards 1 MOA subtends 10.47 inches. Since many modern telescopic sights are adjustable in half (1/2), quarter (1/4) or eighth (1/8) MOA increments, also known as "clicks", zeroing and adjustments are made by counting 2, 4 and 8 clicks per MOA respectively. For example, if the point of impact is 3 inches high and 1.5 inches left of the point of aim at 100 yards (which for instance could be measured by using a spotting scope with a calibrated reticle), the scope needs to be adjusted 3 MOA down, and 1.5 MOA right. Such adjustments are trivial when the scope's adjustment dials have a MOA scale printed on them, and even figuring the right number of clicks is relatively easy on scopes that "click" in fractions of MOA. This makes zeroing and adjustments much easier: Another common system of measurement in firearm scopes is the milliradian (mrad). Zeroing an mrad based scope is easy for users familiar with base ten systems. The most common adjustment value in mrad based scopes is 1/10 mrad (which approximates 1/3 MOA). One thing to be aware of is that some MOA scopes, including some higher-end models, are calibrated such that an adjustment of 1 MOA on the scope knobs corresponds to exactly 1 inch of impact adjustment on a target at 100 yards, rather than the mathematically correct 1.047 inches. This is commonly known as the Shooter's MOA (SMOA) or Inches Per Hundred Yards (IPHY). 
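The MOA arithmetic above (subtension at distance, and counting clicks for a correction) can be sketched as follows. This is not from the original text; the function names, the quarter-MOA click value and the example offsets are assumptions for illustration.
<syntaxhighlight lang="python">
import math

def subtension_inches(moa, distance_yards):
    # 1 MOA = 1/60 degree; 1 yard = 36 inches
    return math.tan(math.radians(moa / 60.0)) * distance_yards * 36.0

def clicks_for_correction(offset_inches, distance_yards, click_moa=0.25):
    moa_needed = offset_inches / subtension_inches(1.0, distance_yards)
    return round(moa_needed / click_moa)

print(subtension_inches(1.0, 100))       # ~1.047 inches at 100 yards
print(subtension_inches(1.0, 1000))      # ~10.47 inches at 1000 yards
print(clicks_for_correction(3.0, 100))   # 3 inches high at 100 yd ~ 2.9 MOA, i.e. 11 quarter-MOA clicks
</syntaxhighlight>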
While the difference between one true MOA and one SMOA is less than half of an inch even at 1000 yards, this error compounds significantly on longer range shots that may require adjustment upwards of 20–30 MOA to compensate for the bullet drop. If a shot requires an adjustment of 20 MOA or more, the difference between true MOA and SMOA will add up to 1 inch or more. In competitive target shooting, this might mean the difference between a hit and a miss. The physical group size equivalent to "m" minutes of arc can be calculated as follows: group size = tan(m/60°) × distance. In the example previously given, for 1 minute of arc, and substituting 3,600 inches for 100 yards, 3,600 × tan(1/60°) ≈ 1.047 inches. In metric units 1 MOA at 100 metres ≈ 2.908 centimetres. Sometimes, a precision-oriented firearm's performance will be measured in MOA. This simply means that under ideal conditions (i.e. no wind, high-grade ammo, clean barrel, and a stable mounting platform such as a vise or a benchrest used to eliminate shooter error), the gun is capable of producing a group of shots whose center points (measured center-to-center) fit into a circle whose average diameter, over several groups, is subtended by that amount of arc. For example, a "1 MOA rifle" should be capable, under ideal conditions, of repeatably shooting 1-inch groups at 100 yards. Most higher-end rifles are warrantied by their manufacturer to shoot under a given MOA threshold (typically 1 MOA or better) with specific ammunition and no error on the shooter's part. For example, Remington's M24 Sniper Weapon System is required to shoot 0.8 MOA or better, or be rejected from sale by quality control. Rifle manufacturers and gun magazines often refer to this capability as "sub-MOA", meaning a gun consistently shooting groups under 1 MOA. This means that a single group of 3 to 5 shots at 100 yards, or the average of several groups, will measure less than 1 MOA between the two furthest shots in the group, i.e. all shots fall within 1 MOA. If larger samples are taken (i.e., more shots per group) then group size typically increases; however, this will ultimately average out. If a rifle were truly a 1 MOA rifle, it would be just as likely that two consecutive shots land exactly on top of each other as that they land 1 MOA apart. For 5-shot groups, based on 95% confidence, a rifle that normally shoots 1 MOA can be expected to shoot groups between 0.58 MOA and 1.47 MOA, although the majority of these groups will be under 1 MOA. What this means in practice is if a rifle that shoots 1-inch groups on average at 100 yards shoots a group measuring 0.7 inches followed by a group that is 1.3 inches, this is not statistically abnormal. The metric system counterpart of the MOA is the milliradian (mrad or 'mil'), being equal to 1/1,000 of the target range, laid out on a circle that has the observer as centre and the target range as radius. The number of milliradians on a full such circle therefore always is equal to 2 × π × 1,000, regardless of the target range. Therefore, 1 MOA ≈ 0.2909 mrad. This means that an object which spans 1 mrad on the reticle is at a range that is in metres equal to the object's size in millimetres (e.g. an object of 100 mm subtending 1 mrad is 100 metres away). So there is no conversion factor required, contrary to the MOA system. A reticle with markings (hashes or dots) spaced one mrad apart (or a fraction of a mrad) is called a mrad reticle. If the markings are round they are called mil-dots. 
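The long-range divergence between true MOA and SMOA, and the MOA-to-milliradian relation given above, can be checked numerically. This short sketch is not from the original text; the 20 MOA / 1000 yard example values are assumed for illustration.
<syntaxhighlight lang="python">
import math

def true_moa_inches(moa, yards):
    return math.tan(math.radians(moa / 60.0)) * yards * 36.0

def smoa_inches(moa, yards):
    return moa * yards / 100.0        # "shooter's MOA": exactly 1 inch per 100 yards

moa, yards = 20.0, 1000.0
# ~9.4 inch shortfall if a 20 true-MOA correction is dialed on a turret that is really SMOA
print(true_moa_inches(moa, yards) - smoa_inches(moa, yards))
print(math.radians(1.0 / 60.0) * 1000.0)   # 1 MOA ~ 0.2909 mrad
</syntaxhighlight>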
Conversions from mrad to metric values are exact (e.g. 0.1 mrad equals exactly 10 mm at 100 metres), while conversions of minutes of arc to both metric and imperial values are approximate. Human vision. In humans, 20/20 vision is the ability to resolve a spatial pattern separated by a visual angle of one minute of arc, viewed from a distance of twenty feet. A 20/20 letter subtends 5 minutes of arc in total. Materials. The deviation from parallelism between two surfaces, for instance in optical engineering, is usually measured in arcminutes or arcseconds. In addition, arcseconds are sometimes used in rocking-curve (ω-scan) X-ray diffraction measurements of high-quality epitaxial thin films. Manufacturing. Some measurement devices make use of arcminutes and arcseconds to measure angles when the object being measured is too small for direct visual inspection. For instance, a toolmaker's optical comparator will often include an option to measure in "minutes and seconds".
2433
Alberto Giacometti
Alberto Giacometti (10 October 1901 – 11 January 1966) was a Swiss sculptor, painter, draftsman and printmaker. Beginning in 1922, he lived and worked mainly in Paris but regularly visited his hometown Borgonovo to see his family and work on his art. Giacometti was one of the most important sculptors of the 20th century. His work was particularly influenced by artistic styles such as Cubism and Surrealism. Philosophical questions about the human condition, as well as existential and phenomenological debates, played a significant role in his work. Around 1935 he gave up his Surrealist influences to pursue a deeper analysis of figurative compositions. Giacometti wrote texts for periodicals and exhibition catalogues and recorded his thoughts and memories in notebooks and diaries. His critical nature led to self-doubt about his own work and his self-perceived inability to do justice to his own artistic vision. His insecurities nevertheless remained a powerful motivating artistic force throughout his entire life. Between 1938 and 1944 Giacometti's sculptures had a maximum height of seven centimeters (2.75 inches). Their small size reflected the actual distance between the artist's position and his model. In this context he self-critically stated: "But wanting to create from memory what I had seen, to my terror the sculptures became smaller and smaller". After World War II, Giacometti created his most famous sculptures: his extremely tall and slender figurines. These sculptures were subject to his individual viewing experience—between an imaginary yet real, a tangible yet inaccessible space. In Giacometti's whole body of work, his painting constitutes only a small part. After 1957, however, his figurative paintings were equally as present as his sculptures. The almost monochrome paintings of his late work do not refer to any other artistic styles of modernity. Early life. Giacometti was born in Borgonovo, Switzerland, the eldest of four children of Giovanni Giacometti, a well-known post-Impressionist painter, and Annetta Giacometti-Stampa. He was a descendant of Protestant refugees escaping the Inquisition. Coming from an artistic background, he was interested in art from an early age and was encouraged by his father and godfather. Alberto attended the Geneva School of Fine Arts. His brothers Diego (1902–1985) and Bruno (1907–2012) would go on to become artists and architects as well. Additionally, his cousin Zaccaria Giacometti, later professor of constitutional law and chancellor of the University of Zurich, grew up together with them, having been orphaned at the age of 12 in 1905. Career. In 1922, he moved to Paris to study under the sculptor Antoine Bourdelle, an associate of Rodin. It was there that Giacometti experimented with Cubism and Surrealism and came to be regarded as one of the leading Surrealist sculptors. Among his associates were Miró, Max Ernst, Picasso, Bror Hjorth, and Balthus. Between 1936 and 1940, Giacometti concentrated his sculpting on the human head, focusing on the sitter's gaze. He preferred models he was close to—his sister and the artist Isabel Rawsthorne (then known as Isabel Delmer). This was followed by a phase in which his statues of Isabel became stretched out; her limbs elongated. Obsessed with creating his sculptures exactly as he envisioned through his unique view of reality, he often carved until they were as thin as nails and reduced to the size of a pack of cigarettes, much to his consternation. 
A friend of his once said that if Giacometti decided to sculpt you, "he would make your head look like the blade of a knife". During World War II, Giacometti took refuge in Switzerland. There, in 1946, he met Annette Arm, a secretary for the Red Cross. They married in 1949. After his marriage his tiny sculptures became larger, but the larger they grew, the thinner they became. For the remainder of Giacometti's life, Annette was his main female model. His paintings underwent a parallel procedure. The figures appear isolated and severely attenuated, as the result of continuous reworking. He frequently revisited his subjects: one of his favourite models was his younger brother Diego. Later years. In 1958 Giacometti was asked to create a monumental sculpture for the Chase Manhattan Bank building in New York, which was beginning construction. Although he had for many years "harbored an ambition to create work for a public square", he "had never set foot in New York, and knew nothing about life in a rapidly evolving metropolis. Nor had he ever laid eyes on an actual skyscraper", according to his biographer James Lord. Giacometti's work on the project resulted in the four figures of standing women—his largest sculptures—entitled "Grande femme debout" I through IV (1960). The commission was never completed, however, because Giacometti was unsatisfied by the relationship between the sculpture and the site, and abandoned the project. In 1962, Giacometti was awarded the grand prize for sculpture at the Venice Biennale, and the award brought with it worldwide fame. Even when he had achieved popularity and his work was in demand, he still reworked models, often destroying them or setting them aside to be returned to years later. The prints produced by Giacometti are often overlooked but the catalogue raisonné, "Giacometti – The Complete Graphics and 15 Drawings by Herbert Lust" (Tudor 1970), comments on their impact and gives details of the number of copies of each print. Some of his most important images were in editions of only 30 and many were described as rare in 1970. In his later years Giacometti's works were shown in a number of large exhibitions throughout Europe. Riding a wave of international popularity, and despite his declining health, he traveled to the United States in 1965 for an exhibition of his works at the Museum of Modern Art in New York. As his last work he prepared the text for the book "Paris sans fin", a sequence of 150 lithographs containing memories of all the places where he had lived. Death. Giacometti died in 1966 of heart disease (pericarditis) and chronic obstructive pulmonary disease at the Kantonsspital in Chur, Switzerland. His body was returned to his birthplace in Borgonovo, where he was interred close to his parents. With no children, Annette Giacometti became the sole holder of his property rights. She worked to collect a full listing of authenticated works by her late husband, gathering documentation on the location and manufacture of his works and working to fight the rising number of counterfeited works. When she died in 1993, the Fondation Giacometti was set up by the French state. In May 2007 the executor of his widow's estate, former French foreign minister Roland Dumas, was convicted of illegally selling Giacometti's works to a top auctioneer, Jacques Tajan, who was also convicted. Both were ordered to pay €850,000 to the Alberto and Annette Giacometti Foundation. Artistic analysis. 
Regarding Giacometti's sculptural technique, the Metropolitan Museum of Art notes: "The rough, eroded, heavily worked surfaces of Three Men Walking (II), 1949, typify his technique. Reduced, as they are, to their very core, these figures evoke lone trees in winter that have lost their foliage. Within this style, Giacometti would rarely deviate from the three themes that preoccupied him—the walking man; the standing, nude woman; and the bust—or all three, combined in various groupings." In a letter to Pierre Matisse, Giacometti wrote: "Figures were never a compact mass but like a transparent construction". In the letter, Giacometti writes about how he looked back at the realist, classical busts of his youth with nostalgia, and tells the story of the existential crisis which precipitated the style he became known for. "[I rediscovered] the wish to make compositions with figures. For this I had to make (quickly I thought; in passing), one or two studies from nature, just enough to understand the construction of a head, of a whole figure, and in 1935 I took a model. This study should take, I thought, two weeks and then I could realize my compositions...I worked with the model all day from 1935 to 1940...Nothing was as I imagined. A head became for me an object completely unknown and without dimensions." Since Giacometti achieved exquisite realism with facility when he was executing busts in his early adolescence, his difficulty in re-approaching the figure as an adult is generally understood as a sign of existential struggle for meaning, rather than as a technical deficit. Giacometti was a key player in the Surrealist art movement, but his work resists easy categorization. Some describe it as formalist, others argue it is expressionist or otherwise having to do with what Deleuze calls "blocs of sensation" (as in Deleuze's analysis of Francis Bacon). Even after his excommunication from the Surrealist group, while the intention of his sculpting was usually imitation, the end products were an expression of his emotional response to the subject. He attempted to create renditions of his models the way he saw them, and the way he thought they ought to be seen. He once said that he was sculpting not the human figure but "the shadow that is cast". Scholar William Barrett, in "Irrational Man: A Study in Existential Philosophy" (1962), argues that the attenuated forms of Giacometti's figures reflect the view of 20th-century modernism and existentialism that modern life is increasingly empty and devoid of meaning. "All the sculptures of today, like those of the past, will end one day in pieces...So it is important to fashion one's work carefully in its smallest recess and charge every particle of matter with life." A 2011–2012 exhibition at the Pinacothèque de Paris focused on showing how Giacometti was inspired by Etruscan art. "Walking Man" and other human figures. Giacometti is best known for the bronze sculptures of tall, thin human figures, made in the years 1945 to 1960. Giacometti was influenced by the impressions he took from the people hurrying in the big city. He saw people in motion as "a succession of moments of stillness". The emaciated figures are often interpreted as an expression of the existential fear, insignificance and loneliness of mankind. The mood of fear in the period of the 1940s and the Cold War is reflected in these figures, which feel sad, lonely and difficult to relate to. Legacy. Exhibitions. 
Giacometti's work has been the subject of numerous solo exhibitions including the High Museum of Art, Atlanta (1970); Centre Pompidou, Paris (2007–2008); Pushkin Museum, Moscow, "The Studio of Alberto Giacometti: Collection of the Fondation Alberto et Annette Giacometti" (2008); Kunsthal Rotterdam (2008); Fondation Beyeler, Basel (2009); Buenos Aires (2012); Kunsthalle Hamburg (2013); Pera Museum, Istanbul (2015); Tate Modern, London (2017); Vancouver Art Gallery, "Alberto Giacometti: A Line Through Time" (2019); and the National Gallery of Ireland, Dublin (2022). The National Portrait Gallery, London's first solo exhibition of Giacometti's work, "Pure Presence", opened to five-star reviews on 13 October 2015 (running to 10 January 2016, in honour of the fiftieth anniversary of the artist's death). From April 2019, the Prado Museum in Madrid has highlighted Giacometti in an exhibition. Public collections. Giacometti's work is displayed in numerous public collections. Art foundations. The Fondation Alberto et Annette Giacometti, having received a bequest from Alberto Giacometti's widow Annette, holds a collection of circa 5,000 works, frequently displayed around the world through exhibitions and long-term loans. A public-interest institution, the Foundation was created in 2003 and aims at promoting, disseminating, preserving and protecting Alberto Giacometti's work. The Alberto-Giacometti-Stiftung, established in Zürich in 1965, holds a smaller collection of works acquired from the collection of the Pittsburgh industrialist G. David Thompson. Notable sales. Two of the most expensive sculptures ever sold at auction are by Giacometti. In November 2000 a Giacometti bronze, "Grande Femme Debout I", sold for $14.3 million. "Grande Femme Debout II" was bought by the Gagosian Gallery for $27.4 million at Christie's auction in New York City on 6 May 2008. "L'Homme qui marche I", a life-sized bronze sculpture of a man, became one of the most expensive works of art, and at the time the most expensive sculpture, ever sold at auction when it fetched £65 million (US$104.3 million) at Sotheby's, London, in February 2010. "Grande tête mince", a large bronze bust, sold for $53.3 million just three months later. "L'Homme au doigt" ("Pointing Man") sold for $126 million (£81,314,455.32), or $141.3 million with fees, in Christie's May 2015 "Looking Forward to the Past" sale in New York City. The work had been in the same private collection for 45 years. It remains the most expensive sculpture ever sold at auction. After being showcased on the BBC programme "Fake or Fortune", a plaster sculpture titled "Gazing Head" sold in 2019 for half a million pounds. In April 2021, Giacometti's small-scale bronze sculpture, Nu debout II (1953), was sold from a Japanese private collection and went for £1.5 million ($2 million), against an estimate of £800,000 ($1.1 million). Other legacy. Giacometti created the monument on the grave of Gerda Taro at Père Lachaise Cemetery. In 2001 he was included in an exhibition held at the National Portrait Gallery, London. Giacometti and his sculpture "L'Homme qui marche I" appear on the former 100 Swiss franc banknote. According to a lecture by Michael Peppiatt at Cambridge University on 8 July 2010, Giacometti, who had a friendship with author/playwright Samuel Beckett, created a tree for the set of a 1961 Paris production of "Waiting for Godot". The 2017 movie "Final Portrait" retells the story of his friendship with the biographer James Lord. 
Giacometti is played by Geoffrey Rush.
2439
Anthem
An anthem is a musical composition of celebration, usually used as a symbol for a distinct group, particularly the national anthems of countries. Originally, and in music theory and religious contexts, it also refers more particularly to short sacred choral work (still frequently seen in Sacred Harp and other types of shape note singing) and still more particularly to a specific form of liturgical music. In this sense, its use began in English-speaking churches; it uses English language words, in contrast to the originally Roman Catholic 'motet' which sets a Latin text. Etymology. "Anthem" is derived from the Greek ("antíphōna") via Old English . Both words originally referred to antiphons, a call-and-response style of singing. The adjectival form is "anthemic". History. Anthems were originally a form of liturgical music. In the Church of England, the rubric appoints them to follow the third collect at morning and evening prayer. Several anthems are included in the British coronation service. The words are selected from Holy Scripture or in some cases from the Liturgy and the music is generally more elaborate and varied than that of psalm or hymn tunes. Being written for a trained choir rather than the congregation, the Anglican anthem is analogous to the motet of the Catholic and Lutheran Churches but represents an essentially English musical form. Anthems may be described as "verse", "full", or "full with verse", depending on whether they are intended for soloists, the full choir, or both. Another way of describing an anthem is that it is a piece of music written specifically to fit a certain accompanying text, and it is often difficult to make any other text fit that same melodic arrangement. It also often changes melody and/or meter, frequently multiple times within a single song, and is sung straight through from start to finish, without repeating the melody for following verses like a normal song (although certain sections may be repeated when marked). An example of an anthem with multiple meter shifts, fuguing, and repeated sections is "Claremont", or "Vital Spark of Heav'nly Flame". Another well known example is William Billing's "Easter Anthem", also known as "The Lord Is Risen Indeed!" after the opening lines. This anthem is still one of the more popular songs in the Sacred Harp tune book. The anthem developed as a replacement for the Catholic "votive antiphon" commonly sung as an appendix to the main office to the Blessed Virgin Mary or other saints. Notable composers of liturgical anthems: historic context. During the Elizabethan period, notable anthems were composed by Thomas Tallis, William Byrd, Tye, and Farrant but they were not mentioned in the Book of Common Prayer until 1662 when the famous rubric "In quires and places where they sing here followeth the Anthem" first appears. Early anthems tended to be simple and homophonic in texture, so that the words could be clearly heard. During the 17th century, notable anthems were composed by Orlando Gibbons, Henry Purcell, and John Blow, with the verse anthem becoming the dominant musical form of the Restoration. In the 18th century, famed anthems were composed by Croft, Boyce, James Kent, James Nares, Benjamin Cooke, and Samuel Arnold. In the 19th century, Samuel Sebastian Wesley wrote anthems influenced by contemporary oratorio which stretch to several movements and last twenty minutes or longer. Later in the century, Charles Villiers Stanford used symphonic techniques to produce a more concise and unified structure. 
Many anthems have been written since then, generally by specialists in organ music rather than composers, and often in a conservative style. Major composers have usually written anthems in response to commissions and for special occasions: for instance Edward Elgar's 1912 "Great is the Lord" and 1914 "Give unto the Lord" (both with orchestral accompaniment); Benjamin Britten's 1943 "Rejoice in the Lamb" (a modern example of a multi-movement anthem, today heard mainly as a concert piece); and, on a much smaller scale, Ralph Vaughan Williams's 1952 "O Taste and See" written for the coronation of Queen Elizabeth II. With the relaxation of the rule, in England at least, that anthems should only be in English, the repertoire has been greatly enhanced by the addition of many works from the Latin repertoire. Types. The word "anthem" is commonly used to describe any celebratory song or composition for a distinct group, as in national anthems. Further, some songs are artistically styled as anthems, whether or not they are used as such, including Marilyn Manson's "Irresponsible Hate Anthem", Silverchair's "Anthem for the Year 2000", and Toto's "Child's Anthem". National anthem. A national anthem (also state anthem, national hymn, national song, etc.) is generally a patriotic musical composition that evokes and eulogizes the history, traditions, and struggles of a country's people, recognized either by that state's government as the official national song, or by convention through use by the people. The majority of national anthems are marches or hymns in style. The countries of Latin America, Central Asia, and Europe tend towards more ornate and operatic pieces, while those in the Middle East, Oceania, Africa, and the Caribbean use a simpler fanfare. Some countries that are devolved into multiple constituent states have their own official musical compositions for them (such as with the United Kingdom, Russian Federation, and the former Soviet Union); their constituencies' songs are sometimes referred to as national anthems even though they are not sovereign states. Flag anthem. A flag anthem is generally a patriotic musical composition that extols and praises a flag, typically one of a country, in which case it is sometimes called a national flag anthem. It is often either sung or performed during or immediately before the raising or lowering of a flag during a ceremony. Most countries use their respective national anthems or some other patriotic song for this purpose. However, some countries, particularly in South America, use a separate flag anthem for such purposes. Not all countries have flag anthems. Some used them in the past but no longer do so, such as Iran, China, and South Africa. Flag anthems can be officially codified in law, or unofficially recognized by custom and convention. In some countries, the flag anthem may be just another song, and in others, it may be an official symbol of the state akin to a second national anthem, such as in Taiwan. Sports anthem. Many pop songs are used as sports anthems, notably including Queen's "We Are the Champions" and "We Will Rock You", and some sporting events have their own anthems, most notably including UEFA Champions League. Shared anthems. Although anthems are used to distinguish states and territories, there are instances of shared anthems. "Nkosi Sikelel' iAfrika" became a pan-African liberation anthem and was later adopted as the national anthem of five countries in Africa including Zambia, Tanzania, Namibia and Zimbabwe after independence. 
Zimbabwe and Namibia have since adopted new national anthems. Since 1997, the South African national anthem has been a hybrid song combining new English lyrics with extracts of "Nkosi Sikelel' iAfrika" and the former state anthem "Die Stem van Suid-Afrika". "Hymn to Liberty" is the longest national anthem in the world by length of text. In 1865, the first three stanzas and later the first two officially became the national anthem of Greece and later also that of the Republic of Cyprus. "Forged from the Love of Liberty" was composed as the national anthem for the short-lived West Indies Federation (1958–1962) and was adopted by Trinidad and Tobago when it became independent in 1962. "Esta É a Nossa Pátria Bem Amada" is the national anthem of Guinea-Bissau and was also the national anthem of Cape Verde until 1996. "Oben am jungen Rhein", the national anthem of Liechtenstein, is set to the tune of "God Save the King/Queen". Other anthems that have used the same melody include "Heil dir im Siegerkranz" (Germany), "Kongesangen" (Norway), "My Country, 'Tis of Thee" (United States), "Rufst du, mein Vaterland" (Switzerland), "E Ola Ke Alii Ke Akua" (Hawaii), and "The Prayer of Russians". The Estonian anthem "Mu isamaa, mu õnn ja rõõm" is set to a melody composed in 1848 by Fredrik (Friedrich) Pacius which is also that of the national anthem of Finland: "Maamme" ("Vårt Land" in Swedish). It is also considered to be the ethnic anthem for the Livonian people with lyrics "Min izāmō, min sindimō" ("My Fatherland, my native land"). "Hey, Slavs" is dedicated to Slavic peoples. Its first lyrics were written in 1834 under the title "Hey, Slovaks" ("Hej, Slováci") by Samuel Tomášik and it has since served as the ethnic anthem of the Pan-Slavic movement, the organizational anthem of the Sokol physical education and political movement, the national anthem of Yugoslavia and the transitional anthem of the State Union of Serbia and Montenegro. The song is also considered to be the second, unofficial anthem of the Slovaks. Its melody is based on Mazurek Dąbrowskiego, which has also been the anthem of Poland since 1926, but the Yugoslav variation is much slower and more accentuated. Between 1991 and 1994 "Deșteaptă-te, române!" was the national anthem of both Romania (which adopted it in 1990) and Moldova, but in the case of the latter it was replaced by the current Moldovan national anthem, "Limba noastră". Between 1975 and 1977, the national anthem of Romania "E scris pe tricolor Unire" shared the same melody as the national anthem of Albania "Himni i Flamurit", which is the melody of a Romanian patriotic song "Pe-al nostru steag e scris Unire". The modern national anthem of Germany, "Das Lied der Deutschen", uses the same tune as the 19th- and early 20th-century Austro-Hungarian imperial anthem "Gott erhalte Franz den Kaiser". The "Hymn of the Soviet Union", was used until its dissolution in 1991, and was given new words and adopted by the Russian Federation in 2000 to replace an instrumental national anthem that had been introduced in 1990. "Bro Gozh ma Zadoù", the regional anthem of Brittany and, "Bro Goth Agan Tasow", the Cornish regional anthem, are sung to the same tune as that of the Welsh regional anthem "Hen Wlad Fy Nhadau", with similar words. For parts of states. Some countries, such as the former Soviet Union, Spain, and the United Kingdom, among others, are held to be unions of several "nations" by various definitions. 
Each of the different "nations" may have their own anthem and these songs may or may not be officially recognized; these compositions are typically referred to as regional anthems though may be known by other names as well (e.g. "state songs" in the United States). Austria. In Austria, the situation is similar to that in Germany. The regional anthem of Upper Austria, the "Hoamatgsang" (), is notable as the only (official) German-language anthem written – and sung – entirely in dialect. Belgium. In Belgium, Wallonia uses "Le Chant des Wallons" and Flanders uses "De Vlaamse Leeuw". Brazil. Most of the Brazilian states have official anthems. Minas Gerais uses an adapted version of the traditional Italian song "Vieni sul mar" as its unofficial anthem. During the Vargas Era (1937–1945) all regional symbols including anthems were banned, but they were legalized again by the Eurico Gaspar Dutra government. Canada. The Canadian province of Newfoundland and Labrador, having been the independent Dominion of Newfoundland before 1949, also has its own regional anthem from its days as a dominion and colony of the UK, the "Ode to Newfoundland". It was the only Canadian province with its own anthem until 2010, when Prince Edward Island adopted the 1908 song "The Island Hymn" as its provincial anthem. Czechoslovakia. Czechoslovakia had a national anthem composed of two parts, the Czech anthem followed by one verse of the Slovak one. After the dissolution of Czechoslovakia, the Czech Republic adopted its own regional anthem as its national one, whereas Slovakia did so with slightly changed lyrics and an additional stanza. Germany. In Germany, many of the Länder (states) have their own anthems, some of which predate the unification of Germany in 1871. A prominent example is the Hymn of Bavaria, which also has the status of an official anthem (and thus enjoys legal protection). There are also several unofficial regional anthems, like the "Badnerlied" and the "Niedersachsenlied". India. Some of the states and union territories of India have officially adopted their own state anthem for use during state government functions. Malaysia. All the individual states of Malaysia have their own anthems. Mexico. In Mexico, after the national anthem was established in 1854, most of the states of the federation adopted their own regional anthems, which often emphasize heroes, virtues or particular landscapes. In particular, the regional anthem of Zacatecas, the "Marcha de Zacatecas", is one of the more well-known of Mexico's various regional anthems. Serbia and Montenegro. In 2005 and 2004 respectively, the Serbian and Montenegrin regions of Serbia and Montenegro adopted their own regional anthems. When the two regions both became independent countries in mid-2006, their regional anthems became their national ones. Soviet Union. Fourteen of the fifteen constituent states of the Soviet Union had their own official song which was used at events connected to that region, and also written and sung in that region's own language. The Russian Soviet Federative Socialist Republic used the Soviet Union's national anthem as its regional anthem ("The Internationale" from 1917 to 1944 and the "National Anthem of the Soviet Union" from 1944 to 1990) until 1990, the last of the Soviet constituent states to do so. 
After the Soviet Union disbanded in the early 1990s, some of its former constituent states, now sovereign nations in their own right, retained the melodies of their old Soviet-era regional anthems until replacing them or, in some cases, still use them today. Unlike most national anthems, few of which were composed by renowned composers, the Soviet Union's various regional anthems were composed by some of the best Soviet composers, including the world-renowned Gustav Ernesaks (Estonia), Aram Khachaturian (Armenia), Otar Taktakishvili (Georgia), and Uzeyir Hajibeyov (Azerbaijan). The lyrics present great similarities, all having mentions of Vladimir Lenin (and most, in their initial versions, of Joseph Stalin, the Armenian and Uzbek anthems being exceptions), of the guiding role of the Communist Party of the Soviet Union, and of the brotherhood of the Soviet peoples, including a specific reference to the friendship of the Russian people (the Estonian, Georgian and Karelo-Finnish anthems were apparently exceptions to this last rule). Some of the Soviet regional anthems' melodies can be sung with the Soviet Union anthem's lyrics (the Ukrainian and Belarusian ones fit best in this case). Most of these regional anthems were replaced with new national ones during or after the dissolution of the Soviet Union; Belarus, Kazakhstan (until 2006), Tajikistan, Turkmenistan (until 1997), and Uzbekistan kept the melodies, but with different lyrics. Russia itself had abandoned the Soviet hymn, replacing it with a tune by Glinka. However, with Vladimir Putin coming to power, the old Soviet tune was restored, with new lyrics written to it. Like the hammer and sickle and the red star, the public performance of the Soviet Union's various regional anthems and of the national anthem of the Soviet Union itself is considered an occupation symbol, as well as a symbol of totalitarianism and state terror, by several countries formerly either members of or occupied by the Soviet Union. Accordingly, Latvia, Lithuania, Hungary, and Ukraine have banned those anthems amongst other things deemed to be symbols of fascism, socialism, communism, and the Soviet Union and its republics. In Poland, dissemination of items which are "media of fascist, communist, or other totalitarian symbolism" was criminalized in 1997. However, in 2011 the Constitutional Tribunal found this sanction to be unconstitutional. In contrast to this treatment of the "symbolism", promotion of fascist, communist and other totalitarian "ideology" remains illegal. Those laws do not apply to the anthems of Russia, Belarus, Uzbekistan, Kazakhstan, and Tajikistan, which used the melody with different lyrics. Spain. In Spain, the situation is similar to that in Austria and Germany. Unlike the national anthem, most of the anthems of the autonomous communities have words. All are official. Three prominent examples are "Els Segadors" of Catalonia, "Eusko Abendaren Ereserkia" of the Basque Country, and "Os Pinos" of Galicia, all written and sung in the local languages. United Kingdom. The United Kingdom's national anthem is "God Save the King" but its constituent countries and Crown Dependencies also have their own equivalent songs which have varying degrees of official recognition. England, Scotland, Wales, and Northern Ireland each have anthems which are played at occasions such as sports matches and official events. 
The Isle of Man, a Crown dependency, uses "God Save the King" as a Royal anthem, but also has its own local anthem, "O Land of Our Birth" (Manx: "O Halloo Nyn Ghooie"). United States. Although the United States has "The Star-Spangled Banner" as its official national anthem, all except two of its constituent states and territories also have their own regional anthems (referred to by most US states as a "state song"), along with Washington, DC. The two exceptions are New Jersey, which has never had an official state song, and Maryland, which rescinded "Maryland, My Maryland" in 2021 due to its racist language and has yet to adopt a replacement. The state songs are selected by each state legislature, and/or state governor, as a symbol (or emblem) of that particular US state. Some US states have more than one official state song, and may refer to some of their official songs by other names; for example, Arkansas officially has two state songs, plus a state anthem, and a state historical song. Tennessee has the most state songs, with 9 official state songs and an official bicentennial rap. Arizona has a song that was written specifically as a state anthem in 1915, as well as the 1981 country hit "Arizona", which it adopted as the alternate state anthem in 1982. Two individuals, Stephen Foster and John Denver, have each written or co-written two state songs. Foster's two state songs, "Old Folks at Home" (better known as "Swanee Ribber" or "Suwannee River"), adopted by Florida, and "My Old Kentucky Home", are among the best-known songs in the US. On March 12, 2007, the Colorado Senate passed a resolution to make Denver's trademark 1972 hit "Rocky Mountain High" one of the state's two official state songs, sharing duties with its predecessor, "Where the Columbines Grow". On March 7, 2014, the West Virginia Legislature approved a resolution to make Denver's "Take Me Home, Country Roads" one of four official state songs of West Virginia. Governor Earl Ray Tomblin signed the resolution into law on March 8, 2014. Additionally, Woody Guthrie wrote or co-wrote two state "folk songs" – Roll On, Columbia, Roll On and Oklahoma Hills – but they have separate status from the official state "songs" of Washington and Oklahoma, respectively. Other well-known state songs include "Yankee Doodle", "You Are My Sunshine", "Rocky Top", and "Home on the Range"; a number of others are popular standards, including "Oklahoma" (from the Rodgers and Hammerstein musical), Hoagy Carmichael's "Georgia on My Mind", "Tennessee Waltz", "Missouri Waltz", and "On the Banks of the Wabash, Far Away". Many of the others are much less well-known, especially outside the state. Virginia's previous state song, "Carry Me Back to Old Virginny", adopted in 1940, was rescinded in 1997 by the Virginia General Assembly due to its racist language; in 2015, "Our Great Virginia" was made the new state song of Virginia. Iowa ("The Song of Iowa") uses the tune from the song "O Tannenbaum" as the melody to its official state song. Yugoslavia. In Yugoslavia, each of the country's constituent states (except for Bosnia and Herzegovina) had the right to have its own anthem, but only the Croatian one actually did so initially, later joined by the Slovene one on the brink of the breakup of Yugoslavia. Before 1989, Macedonia did not officially use a regional anthem, even though one was proclaimed during World War II by ASNOM. International organizations. 
Larger entities also sometimes have anthems, in some cases known as 'international anthems'. "Lullaby" is the official anthem of UNICEF composed by Steve Barakatt. "The Internationale" is the organizational anthem of various socialist movements. Before March 1944, it was also the anthem of the Soviet Union and the Comintern. ASEAN Way is the official anthem of ASEAN. The tune of the "Ode to Joy" from Beethoven's Symphony No. 9 is the official anthem of the European Union and of the Council of Europe. Let's All Unite and Celebrate is the official anthem of the African Union ("Let Us All Unite and Celebrate Together"). The Olympic Movement also has its own organizational anthem. Esperanto speakers at meetings often use the song "La Espero" as their linguistic anthem. The first South Asian Anthem by poet-diplomat Abhay K may inspire SAARC to come up with an official SAARC Anthem. "Ireland's Call" was commissioned as the sporting anthem of both the Ireland national rugby union team and the Ireland national rugby league team, which are composed of players from both jurisdictions on the island of Ireland, in response to dissatisfaction among Northern Ireland unionists with the use of the Irish national anthem. "Ireland's Call" has since been used by some other all-island bodies. An international anthem also unifies a group of organizations sharing the same appellation such as the International Anthem of the Royal Golf Clubs composed by Steve Barakatt. Same applies to the European Broadcasting Union: the prelude of Te Deum in D Major by Marc-Antoine Charpentier is played before each official Eurovision and Euroradio broadcast. The prelude's first bars are heavily associated with the Eurovision Song Contest. Global anthem. Various artists have created "Earth Anthems" for the entire planet, typically extolling the ideas of planetary consciousness. Though UNESCO have praised the idea of a global anthem, the UN has never adopted an official song.
2440
Albrecht Altdorfer
Albrecht Altdorfer (c. 1480 – 12 February 1538) was a German painter, engraver and architect of the Renaissance working in Regensburg, Bavaria. Along with Lucas Cranach the Elder and Wolf Huber he is regarded as the main representative of the Danube School, setting biblical and historical subjects against landscape backgrounds of expressive colours. He is remarkable as one of the first artists to take an interest in landscape as an independent subject. As an artist also making small intricate engravings he is seen to belong to the Nuremberg Little Masters. Biography. Altdorfer was born in Regensburg or Altdorf around 1480. He acquired an interest in art from his father, Ulrich Altdorfer, who was a painter and miniaturist. At the start of his career, he won public attention by creating small, intimate, modestly scaled works in unconventional media and with eccentric subject matter. He settled in the free imperial city of Regensburg, a town located on the Danube River, in 1505, eventually becoming the town architect and a town councillor. His first signed works include engravings and drawings such as the "Stigmata of St. Francis" and "St. Jerome". His models were niellos and copper engravings from the workshops of Jacopo de Barbari and Albrecht Dürer. Around 1511 or earlier, he travelled down the river and south into the Alps, where the scenery moved him so deeply that he became the first landscape painter in the modern sense, making him the leader of the Danube School, a circle that pioneered landscape as an independent genre, in southern Germany. From 1513 he was in the service of Maximilian I in Innsbruck, where he received several commissions from the imperial court. During the turmoil of the Protestant Reformation, he devoted himself mostly to architecture; paintings of the period, showing his increasing attention to architecture, include the "Nativity of the Virgin". In 1529, he executed "The Battle of Alexander at Issus" for Duke William IV of Bavaria. In the 1520s he returned to Regensburg as a wealthy man, and became a member of the city's council. He was also responsible for the fortifications of Regensburg. In that period his works are influenced by artists such as Giorgione and Lucas Cranach, as shown by his "Crucifixion". In 1535, he was in Vienna. He died at Regensburg in 1538. Altdorfer's surviving work comprises 55 panels, 120 drawings, 125 woodcuts, 78 engravings, 36 etchings, 24 paintings on parchment, and fragments from a mural for the bathhouse of the Kaiserhof in Regensburg. This production extends at least over the period 1504–1537. He signed and dated each one of his works. Painting. Altdorfer was the pioneer painter of pure landscape, making landscape the subject of a painting in its own right, as well as of compositions dominated by their landscape settings; these comprise much of his oeuvre. He believed that the human figure should not disrupt nature, but rather participate in it or imitate its natural processes. Taking and developing the landscape style of Lucas Cranach the Elder, he shows the hilly landscape of the Danube valley with thick forests of drooping and crumbling firs and larches hung with moss, and often dramatic colouring from a rising or setting sun. His "Landscape with Footbridge" (National Gallery, London) of 1518–1520 is claimed to be the first pure landscape in oil. In this painting, Altdorfer places a large tree that is cut off by the margins at the center of the landscape, making it the central axis and focus within the piece. 
Some viewers perceive anthropomorphic stylisation—the tree supposedly exhibiting human qualities such as the drapery of its limbs. He also made many fine finished drawings, mostly landscapes, in pen and watercolour such as the "Landscape with the Woodcutter" in 1522. The drawing opens at ground level on a clearing surrounding an enormous tree that is placed in the center, dominating the picture. Some see the tree pose and gesticulate as if it was human, splaying its branches out in every corner. Halfway up the tree trunk, hangs a gabled shrine. At the time, a shrine like this might shelter an image of the Crucifixion or the Virgin Mary, but since it is turned away from the viewer, we are not sure what it truly is. At the bottom of the tree, a tiny figure of a seated man, crossed legged, holds a knife and axe, declaring his status in society/occupation. Also, he often painted scenes of historical and biblical subjects, set in atmospheric landscapes. His best religious scenes are intense, with their glistening lights and glowing colours sometimes verging on the expressionistic. They often depict moments of intimacy between Christ and his mother, or various saints. His sacral masterpiece and one of the most famous religious works of art of the later Middle Ages is "The Legend of St. Sebastian" and "The Passion of Christ" of the so-called "Sebastian Altar" in "St. Florian's Priory" ("Stift Sankt Florian") near Linz, Upper Austria. When closed the altarpiece displayed the four panels of the legend of St. Sebastian's Martyrdom, while the opened wings displayed the Stations of the Cross. Today the altarpiece is dismantled and the predellas depicting the two final scenes, "Entombment" and "Resurrection" were sold to Kunsthistorisches Museum in Vienna in 1923 and 1930. Both these paintings share a similar formal structure that consists of an open landscape that is seen beyond and through the opening of a dark grotto. The date of completion on the resurrection panel is 1518. Altdorfer often distorts perspective to subtle effect. His donor figures are often painted completely out of scale with the main scene, as in paintings of the previous centuries. He also painted some portraits; overall his painted oeuvre was not large. In his later works, Altdorfer moved more towards mannerism and began to depict the human form to the conformity of the Italian model, as well as dominate the picture with frank colors. Paintings in Munich. His rather atypical "Battle of Issus" (or of "Alexander") of 1529 was commissioned by William IV, Duke of Bavaria as part of a series of eight historical battle scenes destined to hang in the Residenz in Munich. Albrecht Altdorfer's depiction of the moment in 333 BCE when Alexander the Great routed Darius III for supremacy in Asia Minor is vast in ambition, sweeping in scope, vivid in imagery, rich in symbols, and obviously heroic—the Iliad of painting, as literary critic Friedrich Schlegel suggested In the painting, a swarming cast of thousands of soldiers surround the central action: Alexander on his white steed, leading two rows of charging cavalrymen, dashes after a fleeing Darius, who looks anxiously over his shoulder from a chariot. The opposing armies are distinguished by the colors of their uniforms: Darius' army in red and Alexander's in blue. The upper half of "The Battle of Alexander" expands with unreal rapidity into an arcing panorama comprehending vast coiling tracts of globe and sky. 
The victory is also symbolized on the picture's surface: the sun outshines the moon, just as the Imperial and allied armies successfully repelled the Turks. By blending the mass of soldiers into the landscape, the painting shows that Altdorfer believed the depiction of landscape to be just as significant as a historical event, such as a war. He renounced the office of "Mayor of Regensburg" to accept the commission. Few of his other paintings resemble this apocalyptic scene of two huge armies dominated by an extravagant landscape seen from a very high viewpoint, which looks south over the whole Mediterranean from modern Turkey to include the island of Cyprus and the mouths of the Nile and the Red Sea (behind the isthmus to the left) on the other side. However, his style here is a development of that of a number of miniatures of battle scenes he had done much earlier for Maximilian I in his illuminated manuscript "Triumphal Procession" in 1512–14. It is thought to be the earliest painting to show the curvature of the Earth from a great height. The "Battle" is now in the Alte Pinakothek, which has the best collection of Altdorfer's paintings, including also his small "St. George and the Dragon" (1510), in oil on parchment, where the two figures are tiny and almost submerged in the lush, dense forest that towers over them. Altdorfer seems to exaggerate the measurements of the forest in comparison to the figures: the leaves appear to be larger than the horse, showing the significance of nature and landscape. He also emphasizes line within the work, by displaying the upward growth of the forest with the vertical and diagonal lines of the trunks. There is a small opening of the forest in the lower right-hand corner that provides a rest for the eyes; it serves to create depth within the painting and is the only place where the figures can be seen. The human form is completely absorbed by the thickness of the forest. Fantastic light effects provide a sense of mystery and dissolve the outline of objects. Without the contrast of light, the figures would blend in with their surrounding environment. Altdorfer's figures are invariably the complement of his romantic landscapes; for them he borrowed Albrecht Dürer's inventive iconography, but the panoramic setting is personal and has nothing to do with the fantasy landscapes of the Netherlands. A painting of 1526, set outside an Italianate skyscraper of a palace, shows his interest in architecture. Another small oil on parchment, "Danube Landscape with Castle Wörth" (c. 1520), is one of the earliest accurate topographical paintings of a particular building in its setting, of a type that was to become a cliché in later centuries. Printmaking. Altdorfer was a significant printmaker, with numerous engravings and about ninety-three woodcuts. These included some for the "Triumphs of Maximilian", where he followed the overall style presumably set by Hans Burgkmair, although he was able to escape somewhat from this in his depictions of the more disorderly baggage-train, still coming through a mountain landscape. However, most of his best prints are etchings, many of landscapes; in these he was able most easily to use his drawing style. He was one of the most successful early etchers, and was unusual for his generation of German printmakers in doing no book illustrations. He often combined etching and engraving techniques in a single plate, and produced about 122 intaglio prints altogether. 
Many of Altdorfer's prints are quite small in size, and he is considered to be one of the main members of the group of artists known as the Little Masters. Arthur Mayger Hind considers his graphic work to be somewhat lacking in technical skill but with an "intimate personal touch", and notes his characteristic feeling for landscape. Public life. As the superintendent of the municipal buildings, Altdorfer oversaw the construction of several commercial structures, such as a slaughterhouse and a building for wine storage, possibly even designing them. He was considered to be an outstanding politician of his day. In 1517 he was a member of the "Ausseren Rates", the council on external affairs, and in this capacity was involved in the expulsion of the Jews, the destruction of the synagogue, and the construction in its place of a church and shrine to the Schöne Maria in 1519. Altdorfer made etchings of the interior of the synagogue and designed a woodcut of the cult image of the Schöne Maria. In 1529–1530 he was also charged with reinforcing certain city fortifications in response to the Turkish threat. Albrecht's brother, Erhard Altdorfer, was also a painter and printmaker in woodcut and engraving, and a pupil of Lucas Cranach the Elder.
2441
House of Ascania
The House of Ascania was a dynasty of German rulers. It is also known as the House of Anhalt, a name that refers to its longest-held possession, Anhalt. The Ascanians are named after Ascania (or Ascaria) Castle, known as "Schloss Askanien" in German, which was located near and named after Aschersleben. The castle was the seat of the County of Ascania, a title that was later subsumed into the titles of the princes of Anhalt. History. The earliest known member of the house, Esiko, Count of Ballenstedt, first appears in a document of 1036. He is assumed to have been a grandson (through his mother) of Odo I, Margrave of the Saxon Ostmark. From Odo, the Ascanians inherited large properties in the Saxon Eastern March. Esiko's grandson was Otto, Count of Ballenstedt, who died in 1123. By Otto's marriage to Eilika, daughter of Magnus, Duke of Saxony, the Ascanians became heirs to half of the property of the House of Billung, former dukes of Saxony. Otto's son, Albert the Bear, became, with the help of his mother's inheritance, the first Ascanian duke of Saxony in 1139. However, he soon lost control of Saxony to the rival House of Guelph. Albert inherited the Margraviate of Brandenburg in 1157 from its last Wendish ruler, Pribislav, and he became the first Ascanian margrave. Albert, and his descendants of the House of Ascania, then made considerable progress in Christianizing and Germanizing the lands. As a borderland between German and Slavic cultures, the country was known as a march. In 1237 and 1244, two towns, Cölln and Berlin, were founded during the rule of Otto and Johann, grandsons of Margrave Albert the Bear. Later, they were united into one city, Berlin. The emblems of the House of Ascania, a red eagle and a bear, became the heraldic emblems of Berlin. In 1320, the Brandenburg Ascanian line came to an end. After the Emperor had deposed the Guelph rulers of Saxony in 1180, the Ascanians returned to rule the Duchy of Saxony, which had been reduced to its eastern half by the Emperor. However, even in eastern Saxony, the Ascanians could establish control only in limited areas, mostly near the River Elbe. In the 13th century, the Principality of Anhalt was split off from the Duchy of Saxony. Later, the remaining state was split into Saxe-Lauenburg and Saxe-Wittenberg. The Ascanian dynasties in the two Saxon states became extinct in 1689 and in 1422, respectively, but Ascanians continued to rule in the smaller state of Anhalt and its various subdivisions until the monarchy was abolished in 1918. Catherine the Great, Empress of Russia from 1762 to 1796, was a member of the House of Ascania, herself the daughter of Christian August, Prince of Anhalt-Zerbst. Rulers of the House of Ascania. House of Ascania. Family Trees. (genealogical list of the dynasty in German) Armorial. The original arms of the house of Ascania, from their ancestors the Saxon counts of Ballenstedt, were "Barry of ten sable and or". The Ascanian margrave Albert the Bear was invested with the Saxon ducal title in 1138, when he succeeded the Welf Henry the Lion, who was deposed by Emperor Frederick Barbarossa. In 1180, Albert's son Bernhard, Count of Anhalt, received the remaining Saxon territories around Wittenberg and Lauenburg, and the ducal title. Legend, though unlikely to be true, has it that when he rode in front of the emperor on the occasion of his investiture, he carried a shield with his escutcheon of the Ballenstedt coat of arms ("barry sable and or"). 
Barbarossa took the rue wreath he wore against the heat of the sun from his head and hung it over Bernhard's shield, thus creating the Saxon "crancelin vert" ("Barry of ten sable and or, a crancelin vert"). A more likely explanation is that it probably symbolized the waiver of the Lauenburg lands. From about 1260, the Duchy of Saxe-Wittenberg emerged under the Ascanian duke Albert II, who adopted the tradition of the Saxon stem duchy and was granted the Saxon electoral dignity, against the fierce protest of his Ascanian Saxe-Lauenburg cousins. This was confirmed by the Golden Bull of 1356. As the Ascanian Electors of Saxony also held the high office of Arch-Marshal of the Holy Roman Empire, they added the ensign "Per fess sable and argent, two swords in saltire gules" (the swords later featuring as the trademark of the Meissen china factory) to their coat of arms. When the line became extinct in 1422, the arms and electoral dignity were adopted by the Wettin margrave Frederick IV of Meissen, as they had become synonymous with the Saxon ducal title. When the Free State of Saxony was re-established upon German reunification, the coat of arms was formally confirmed in 1991. The chivalric order of the house was the House Order of Albert the Bear (German: "Hausorden Albrechts des Bären" or "Der Herzoglich Anhaltische Hausorden Albrechts des Bären"), which was founded in 1836 as a joint House Order by three dukes of Anhalt from separate branches of the family: Henry, Duke of Anhalt-Köthen, Leopold IV, Duke of Anhalt-Dessau, and Alexander Karl, Duke of Anhalt-Bernburg. The namesake of the order, Albert the Bear, was the first Margrave of Brandenburg from the House of Ascania. The origin of his nickname "the Bear" is unknown.
2443
Acceleration
In mechanics, acceleration is the rate of change of the velocity of an object with respect to time. Accelerations are vector quantities (in that they have magnitude and direction). The orientation of an object's acceleration is given by the orientation of the "net" force acting on that object. The magnitude of an object's acceleration, as described by Newton's Second Law, is the combined effect of two causes: the net balance of all external forces acting on the object (to which the magnitude is directly proportional) and the object's mass (to which it is inversely proportional). The SI unit for acceleration is the metre per second squared (m/s²). For example, when a vehicle starts from a standstill (zero velocity, in an inertial frame of reference) and travels in a straight line at increasing speeds, it is accelerating in the direction of travel. If the vehicle turns, an acceleration occurs toward the new direction and changes its motion vector. The acceleration of the vehicle in its current direction of motion is called a linear (or tangential during circular motions) acceleration, the reaction to which the passengers on board experience as a force pushing them back into their seats. When changing direction, the effecting acceleration is called radial (or centripetal during circular motions) acceleration, the reaction to which the passengers experience as a centrifugal force. If the speed of the vehicle decreases, this is an acceleration in the opposite direction of the velocity vector (mathematically a negative, if the movement is unidimensional and the velocity is positive), sometimes called deceleration or retardation, and passengers experience the reaction to deceleration as an inertial force pushing them forward. Such negative accelerations are often achieved by retrorocket burning in spacecraft. Both acceleration and deceleration are treated the same, as they are both changes in velocity. Each of these accelerations (tangential, radial, deceleration) is felt by passengers until their relative (differential) velocity is neutralized in reference to the acceleration due to the change in speed. Definition and properties. Average acceleration. An object's average acceleration over a period of time is its change in velocity, $\Delta \mathbf{v}$, divided by the duration of the period, $\Delta t$. Mathematically, $\bar{\mathbf{a}} = \frac{\Delta \mathbf{v}}{\Delta t}.$ Instantaneous acceleration. Instantaneous acceleration, meanwhile, is the limit of the average acceleration over an infinitesimal interval of time. In the terms of calculus, instantaneous acceleration is the derivative of the velocity vector with respect to time: $\mathbf{a} = \frac{d\mathbf{v}}{dt}.$ As acceleration is defined as the derivative of velocity, $\mathbf{v}$, with respect to time and velocity is defined as the derivative of position, $\mathbf{x}$, with respect to time, acceleration can be thought of as the second derivative of $\mathbf{x}$ with respect to $t$: $\mathbf{a} = \frac{d\mathbf{v}}{dt} = \frac{d^2\mathbf{x}}{dt^2}.$ By the fundamental theorem of calculus, it can be seen that the integral of the acceleration function $\mathbf{a}(t)$ is the velocity function $\mathbf{v}(t)$; that is, the area under the curve of an acceleration vs. time ($\mathbf{a}$ vs. $t$) graph corresponds to the change of velocity: $\Delta \mathbf{v} = \int \mathbf{a}\, dt.$ Likewise, the integral of the jerk function $\mathbf{j}(t)$, the derivative of the acceleration function, can be used to find the change of acceleration at a certain time: $\Delta \mathbf{a} = \int \mathbf{j}\, dt.$ Units. Acceleration has the dimensions of velocity (L/T) divided by time, i.e. L T−2. The SI unit of acceleration is the metre per second squared (m s−2); or "metre per second per second", as the velocity in metres per second changes by the acceleration value, every second. Other forms. 
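As a concrete illustration of these definitions, the following Python sketch treats average acceleration as a finite difference of velocities and instantaneous acceleration as the second derivative of position. It is illustrative only; the sample trajectory x(t) = 5t² and the function names are assumptions, not from the source.

```python
def velocity(x, t, dt=1e-3):
    """Numerical first derivative of position: v(t) ~ (x(t+dt) - x(t-dt)) / (2*dt)."""
    return (x(t + dt) - x(t - dt)) / (2 * dt)

def acceleration(x, t, dt=1e-3):
    """Numerical second derivative of position: a(t) ~ (x(t+dt) - 2*x(t) + x(t-dt)) / dt**2."""
    return (x(t + dt) - 2 * x(t) + x(t - dt)) / dt**2

# Assumed sample trajectory: x(t) = 5 t^2, so v(t) = 10 t and a(t) = 10 (constant).
x = lambda t: 5 * t**2

# Average acceleration between t = 1 s and t = 3 s: delta-v divided by delta-t.
avg_a = (velocity(x, 3) - velocity(x, 1)) / (3 - 1)
print(avg_a)               # ~10.0 m/s^2
print(acceleration(x, 2))  # ~10.0 m/s^2 (instantaneous value at t = 2 s)
```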
An object moving in a circular motion—such as a satellite orbiting the Earth—is accelerating due to the change of direction of motion, although its speed may be constant. In this case it is said to be undergoing "centripetal" (directed towards the center) acceleration. Proper acceleration, the acceleration of a body relative to a free-fall condition, is measured by an instrument called an accelerometer. In classical mechanics, for a body with constant mass, the (vector) acceleration of the body's center of mass is proportional to the net force vector (i.e. sum of all forces) acting on it (Newton's second law): formula_9 where F is the net force acting on the body, m is the mass of the body, and a is the center-of-mass acceleration. As speeds approach the speed of light, relativistic effects become increasingly large. Tangential and centripetal acceleration. The velocity of a particle moving on a curved path as a function of time can be written as: formula_10 with v equal to the speed of travel along the path, and formula_11 a unit vector tangent to the path pointing in the direction of motion at the chosen moment in time. Taking into account both the changing speed and the changing direction of formula_11, the acceleration of a particle moving on a curved path can be written using the chain rule of differentiation for the product of two functions of time as: formula_12 where n is the unit (inward) normal vector to the particle's trajectory (also called "the principal normal"), and r is its instantaneous radius of curvature based upon the osculating circle at time t. These components are called the tangential acceleration and the normal or radial acceleration (or centripetal acceleration in circular motion, see also circular motion and centripetal force). Geometrical analysis of three-dimensional space curves, which explains tangent, (principal) normal and binormal, is described by the Frenet–Serret formulas. Special cases. Uniform acceleration. "Uniform" or "constant" acceleration is a type of motion in which the velocity of an object changes by an equal amount in every equal time period. A frequently cited example of uniform acceleration is that of an object in free fall in a uniform gravitational field. The acceleration of a falling body in the absence of resistances to motion is dependent only on the gravitational field strength (also called "acceleration due to gravity"). By Newton's Second Law the force formula_13 acting on a body is given by: formula_14 Because of the simple analytic properties of the case of constant acceleration, there are simple formulas relating the displacement, initial and time-dependent velocities, and acceleration to the time elapsed: formula_15 In particular, the motion can be resolved into two orthogonal parts, one of constant velocity and the other according to the above equations. As Galileo showed, the net result is parabolic motion, which describes, e.g., the trajectory of a projectile in vacuum near the surface of Earth. Circular motion. In uniform circular motion, that is, moving with constant "speed" along a circular path, a particle experiences an acceleration resulting from the change of the direction of the velocity vector, while its magnitude remains constant. The derivative of the location of a point on a curve with respect to time, i.e. its velocity, turns out to be always exactly tangential to the curve and, for a circle, orthogonal to the radius at that point.
Since in uniform motion the velocity in the tangential direction does not change, the acceleration must be in the radial direction, pointing toward the center of the circle. This acceleration constantly changes the direction of the velocity to be tangent at the neighboring point, thereby rotating the velocity vector along the circle. Expressing the centripetal acceleration vector in polar components, where formula_32 is a vector from the centre of the circle to the particle with magnitude equal to this distance, and considering the orientation of the acceleration towards the center, yields formula_33 As usual in rotations, the speed formula_24 of a particle may be expressed as an "angular speed" with respect to a point at the distance formula_25 as formula_36 Thus formula_37 This acceleration and the mass of the particle determine the necessary centripetal force, directed "toward" the centre of the circle, as the net force acting on this particle to keep it in this uniform circular motion. The "centrifugal force", appearing to act outward on the body, is a pseudo force experienced in the frame of reference of the body in circular motion, due to the body's linear momentum, a vector tangent to the circle of motion. In nonuniform circular motion, i.e., when the speed along the curved path is changing, the acceleration has a non-zero component tangential to the curve and is not confined to the principal normal, which points toward the center of the osculating circle and determines the radius formula_25 for the centripetal acceleration. The tangential component is given by the angular acceleration formula_39, i.e., the rate of change formula_40 of the angular speed formula_27 times the radius formula_25. That is, formula_43 The sign of the tangential component of the acceleration is determined by the sign of the angular acceleration (formula_39), and the tangent is always directed at right angles to the radius vector. Relation to relativity. Special relativity. The special theory of relativity describes the behavior of objects traveling relative to other objects at speeds approaching that of light in vacuum. Newtonian mechanics is revealed to be an approximation to reality that remains valid to great accuracy at lower speeds. As the relevant speeds increase toward the speed of light, acceleration no longer follows classical equations. As speeds approach that of light, the acceleration produced by a given force decreases, becoming infinitesimally small as light speed is approached; an object with mass can approach this speed asymptotically, but never reach it. General relativity. Unless the state of motion of an object is known, it is impossible to distinguish whether an observed force is due to gravity or to acceleration—gravity and inertial acceleration have identical effects. Albert Einstein called this the equivalence principle, and said that only observers who feel no force at all—including the force of gravity—are justified in concluding that they are not accelerating.
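The relations discussed above (average acceleration as the change in velocity over an interval, the constant-acceleration equations of motion, and the centripetal relation for uniform circular motion) can be checked with a short numerical sketch. The following Python snippet is purely illustrative: all values and variable names are invented for the example and are not taken from the article.

```python
# Illustrative check of the acceleration relations described above.
# All numerical values are invented for this example.
import math

# Average acceleration: a_avg = (change in velocity) / (duration of the interval)
v_initial = 0.0          # m/s, vehicle starting from rest
v_final = 27.0           # m/s, roughly 97 km/h
dt = 9.0                 # s
a_avg = (v_final - v_initial) / dt
print(f"average acceleration: {a_avg:.2f} m/s^2")

# Constant (uniform) acceleration, e.g. free fall from rest:
# speed v(t) = g*t and distance fallen s(t) = 0.5*g*t^2
g = 9.81                 # m/s^2, acceleration due to gravity near Earth's surface
t = 3.0                  # s
print(f"free fall after {t} s: speed {g * t:.1f} m/s, distance {0.5 * g * t**2:.1f} m")

# Uniform circular motion: centripetal acceleration a_c = v^2 / r = omega^2 * r,
# directed toward the centre of the circle.
r = 50.0                 # m, radius of the circular path
v = 15.0                 # m/s, constant speed along the path
omega = v / r            # rad/s, angular speed
a_c = v**2 / r
assert math.isclose(a_c, omega**2 * r)   # the two expressions agree
print(f"centripetal acceleration: {a_c:.2f} m/s^2 toward the centre")
```

Running the sketch gives an average acceleration of 3.00 m/s^2, a free-fall speed of about 29.4 m/s after three seconds, and a centripetal acceleration of 4.50 m/s^2, consistent with the formulas above.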
2444
Conservation and restoration of cultural property
The conservation and restoration of cultural property focuses on protection and care of cultural property (tangible cultural heritage), including artworks, architecture, archaeology, and museum collections. Conservation activities include preventive conservation, examination, documentation, research, treatment, and education. This field is closely allied with conservation science, curators and registrars. Definition. Conservation of cultural property involves protection and restoration using "any methods that prove effective in keeping that property in as close to its original condition as possible for as long as possible." Conservation of cultural heritage is often associated with art collections and museums and involves collection care and management through tracking, examination, documentation, exhibition, storage, preventive conservation, and restoration. The scope has widened from art conservation, involving protection and care of artwork and architecture, to conservation of cultural heritage, also including protection and care of a broad set of other cultural and historical works. Conservation of cultural heritage can be described as a type of ethical stewardship. It may broadly be divided into: Conservation of cultural property applies simple ethical guidelines: Often there are compromises between preserving appearance, maintaining original design and material properties, and ability to reverse changes. Reversibility is now emphasized so as to reduce problems with future treatment, investigation, and use. In order for conservators to decide upon an appropriate conservation strategy and apply their professional expertise accordingly, they must take into account views of the stakeholder, the values, artist's intent, meaning of the work, and the physical needs of the material. Cesare Brandi in his "Theory of Restoration", describes restoration as "the methodological moment in which the work of art is appreciated in its material form and in its historical and aesthetic duality, with a view to transmitting it to the future". History and science. Key dates. Some consider the tradition of conservation of cultural heritage in Europe to have begun in 1565 with the restoration of the Sistine Chapel frescoes, but more ancient examples include the work of Cassiodorus. Brief history. The care of cultural heritage has a long history, one that was primarily aimed at fixing and mending objects for their continued use and aesthetic enjoyment. Until the early 20th century, artists were normally the ones called upon to repair damaged artworks. During the 19th century, however, the fields of science and art became increasingly intertwined as scientists such as Michael Faraday began to study the damaging effects of the environment to works of art. Louis Pasteur carried out scientific analysis on paint as well. However, perhaps the first organized attempt to apply a theoretical framework to the conservation of cultural heritage came with the founding in the United Kingdom of the Society for the Protection of Ancient Buildings in 1877. The society was founded by William Morris and Philip Webb, both of whom were deeply influenced by the writings of John Ruskin. During the same period, a French movement with similar aims was being developed under the direction of Eugène Viollet-le-Duc, an architect and theorist, famous for his restorations of medieval buildings. 
Conservation of cultural heritage as a distinct field of study initially developed in Germany, where in 1888 Friedrich Rathgen became the first chemist to be employed by a museum, the Königlichen Museen, Berlin (Royal Museums of Berlin). He not only developed a scientific approach to the care of objects in the collections, but disseminated this approach by publishing a Handbook of Conservation in 1898. The early development of conservation of cultural heritage in any area of the world is usually linked to the creation of positions for chemists within museums. In British archaeology, key research and technical experimentation in conservation was undertaken by women such as Ione Gedye both in the field and in archaeological collections, particularly those of the Institute of Archaeology, London. In the United Kingdom, pioneering research into painting materials and conservation, ceramics, and stone conservation was conducted by Arthur Pillans Laurie, academic chemist and Principal of Heriot-Watt University from 1900. Laurie's interests were fostered by William Holman Hunt. In 1924 the chemist Harold Plenderleith began to work at the British Museum with Alexander Scott in the recently created Research Laboratory, although he was actually employed by the Department of Scientific and Industrial Research in the early years. Plenderleith's appointment may be said to have given birth to the conservation "profession" in the UK, although there had been craftsmen in many museums and in the commercial art world for generations. This department was created by the museum to address the deteriorating condition of objects in the collection, damage which resulted from their being stored in the London Underground tunnels during the First World War. The creation of this department moved the focus for the development of conservation theory and practice from Germany to Britain, and made the latter a prime force in this fledgling field. In 1956 Plenderleith wrote a significant handbook called The Conservation of Antiquities and Works of Art, which supplanted Rathgen's earlier tome and set new standards for the development of art and conservation science. In the United States, the development of conservation of cultural heritage can be traced to the Fogg Art Museum, and Edward Waldo Forbes, its director from 1909 to 1944. He encouraged technical investigation, and was Chairman of the Advisory Committee for the first technical journal, Technical Studies in the Field of the Fine Arts, published by the Fogg from 1932 to 1942. Importantly, he also brought chemists onto the museum staff. Rutherford John Gettens was the first such chemist in the US to be permanently employed by an art museum. He worked with George L. Stout, the founder and first editor of Technical Studies. Gettens and Stout co-authored Painting Materials: A Short Encyclopaedia in 1942, reprinted in 1966. This compendium is still cited regularly. Only a few dates and descriptions in Gettens' and Stout's book are now outdated. George T. Oliver, of Oliver Brothers Art Restoration and Art Conservation-Boston (Est. 1850 in New York City), invented the vacuum hot table for relining paintings in the 1920s; he filed a patent for the table in 1937. Oliver's prototype table, which he designed and constructed, is still in operation. Oliver Brothers is believed to be the first and the oldest continuously operating art restoration company in the United States.
The focus of conservation development then accelerated in Britain and America, and it was in Britain that the first International Conservation Organisations developed. The International Institute for Conservation of Historic and Artistic Works (IIC) was incorporated under British law in 1950 as "a permanent organization to co-ordinate and improve the knowledge, methods, and working standards needed to protect and preserve precious materials of all kinds." The rapid growth of conservation professional organizations, publications, journals, newsletters, both internationally and in localities, has spearheaded the development of the conservation profession, both practically and theoretically. Art historians and theorists such as Cesare Brandi have also played a significant role in developing conservation science theory. In recent years ethical concerns have been at the forefront of developments in conservation. Most significantly has been the idea of preventive conservation. This concept is based in part on the pioneering work by Garry Thomson CBE, and his book "Museum Environment", first published in 1978. Thomson was associated with the National Gallery in London; it was here that he established a set of guidelines or environmental controls for the best conditions in which objects could be stored and displayed within the museum environment. Although his exact guidelines are no longer rigidly followed, they did inspire this field of conservation. Conservation laboratories. Conservators routinely use chemical and scientific analysis for the examination and treatment of cultural works. The modern conservation laboratory uses equipment such as microscopes, spectrometers, and various x-ray regime instruments to better understand objects and their components. The data thus collected helps in deciding the conservation treatments to be provided to the object. Ethics. The conservator's work is guided by ethical standards. These take the form of applied ethics. Ethical standards have been established across the world, and national and international ethical guidelines have been written. One such example is: Conservation OnLine provides resources on ethical issues in conservation, including examples of codes of ethics and guidelines for professional conduct in conservation and allied fields; and charters and treaties pertaining to ethical issues involving the preservation of cultural property. As well as standards of practice conservators deal with wider ethical concerns, such as the debates as to whether all art is worth preserving. Keeping up with the international contemporary scenario, recent concerns with sustainability in conservation have emerged. The common understanding that "the care of an artifact should not come at the undue expense of the environment" is generally well accepted within the community and is already contemplated in guidelines of diverse institutions related to the field. Practice. Preventive conservation. Many cultural works are sensitive to environmental conditions such as temperature, humidity and exposure to visible light and ultraviolet radiation. These works must be protected in controlled environments where such variables are maintained within a range of damage-limiting levels. For example, watercolour paintings usually require shielding from sunlight to prevent fading of pigments. Collections care is an important element of museum policy. 
It is an essential responsibility of members of the museum profession to create and maintain a protective environment for the collections in their care, whether in store, on display, or in transit. A museum should carefully monitor the condition of collections to determine when an artifact requires conservation work and the services of a qualified conservator. Interventive conservation and restoration. A teaching programme of interventive conservation was established in the UK at the Institute of Archaeology by Ione Gedye, which is still teaching interventive conservators today. A principal aim of a cultural conservator is to reduce the rate of deterioration of an object. Both non-interventive and interventive methodologies may be employed in pursuit of this goal. Interventive conservation refers to any direct interaction between the conservator and the material fabric of the object. Interventive actions are carried out for a variety of reasons, including aesthetic choices, stabilization needs for structural integrity, or cultural requirements for intangible continuity. Examples of interventive treatments include the removal of discolored varnish from a painting, the application of wax to a sculpture, and the washing and rebinding of a book. Ethical standards within the field require that the conservator fully justify interventive actions and carry out documentation before, during, and after the treatment. One of the guiding principles of conservation of cultural heritage has traditionally been the idea of reversibility, that all interventions with the object should be fully reversible and that the object should be able to be returned to the state in which it was prior to the conservator's intervention. Although this concept remains a guiding principle of the profession, it has been widely critiqued within the conservation profession and is now considered by many to be "a fuzzy concept." Another important principle of conservation is that all alterations should be well documented and should be clearly distinguishable from the original object. An example of a highly publicized interventive conservation effort would be the conservation work conducted on the Sistine Chapel. Sustainable conservation. Recognising that conservation practices should not harm the environment, harm people, or contribute to global warming, the conservation-restoration profession has more recently focused on practices that reduce waste, reduce energy costs, and minimise the use of toxic or harmful solvents. A number of research projects, working groups, and other initiatives have explored how conservation can become a more environmentally sustainable profession. Sustainable conservation practices apply both to work within cultural institutions (e.g. museums, art galleries, archives, libraries, research centres and historic sites) as well as to businesses and private studios. Choice of materials. Conservators and restorers use a wide variety of materials - in conservation treatments, and those used to safely transport, display and store cultural heritage items. These materials can include solvents, papers and boards, fabrics, adhesives and consolidants, plastics and foams, wood products, and many others. Stability and longevity are two important factors conservators consider when selecting materials; sustainability is becoming an increasingly important third. 
Examples of sustainable material choices and practices include: These decisions are not always straightforward - for example, installing deionised or distilled water filters in laboratories reduces waste associated with purchasing bottled products, but increases energy consumption. Similarly, locally-made papers and boards may reduce inherent carbon miles but they may be made with pulp sourced from old-growth forests. Another dilemma is that many conservation-grade materials are chosen because they do not biodegrade. For example, when selecting a plastic with which to make storage enclosures, conservators prefer to use relatively long-lived plastics because they have better ageing properties - they are less likely to become yellow, leach plasticisers, or lose structural integrity and crumble (examples include polyethylene, polypropylene, and polyester). These plastics will also take longer to degrade in landfill. Energy use. Many conservators and cultural organisations have sought to reduce the energy costs associated with controlling indoor storage and display environments (temperature, relative humidity, air filtration, and lighting levels) as well as those associated with the transport of cultural heritage items for exhibitions and loans. In general, lowering the temperature reduces the rate at which damaging chemical reactions occur within materials. For example, storing cellulose acetate film at 10 °C instead of 21 °C is estimated to increase its usable life by over 100 years. Controlling the relative humidity of air helps to reduce hydrolysis reactions and minimises cracking, distortion and other physical changes in hygroscopic materials. Changes in temperature will also bring about changes in relative humidity. Therefore, the conservation profession has placed great importance on controlling indoor environments. Temperature and humidity can be controlled through passive means (e.g. insulation, building design) or active means (air conditioning). Active controls typically require much higher energy use. Energy use increases with specificity - e.g. it will require more energy to maintain a quantity of air within a narrow temperature range (20-22 °C) than within a broad range (18-25 °C). In the past, conservation recommendations have often called for very tight, inflexible temperature and relative humidity set points. In other cases, conservators have recommended strict environmental conditions for buildings that could not reasonably be expected to achieve them, due to the quality of build, local environmental conditions (e.g. recommending temperate conditions for a building located in the tropics) or the financial circumstances of the organisation. This has been an area of particular debate for cultural heritage organisations who lend and borrow cultural items to each other - often, the lender will specify strict environmental conditions as part of the loan agreement, which may be very expensive for the borrowing organisation to achieve, or impossible. The energy costs associated with cold storage and digital storage are also gaining more attention. Cold storage is a very effective strategy to preserve at-risk collections such as cellulose nitrate and cellulose acetate film, which can deteriorate beyond use within decades at ambient conditions. Digital storage costs are rising both for born-digital cultural heritage (photographs, audiovisual, time-based media) and for digital preservation and access copies of cultural heritage.
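Returning to the temperature figures above, the effect of cooler storage can be given a rough quantitative feel with a simple Arrhenius-type rate model. The Python sketch below is a hedged illustration only: the activation energy used is an assumed round value chosen for demonstration, not a figure from this article, and real deterioration also depends on humidity and the condition of the object.

```python
# Rough sketch: how lowering storage temperature slows chemical deterioration.
# Assumes a single Arrhenius-type reaction; the activation energy is an
# assumed illustrative value, not a figure taken from the article.
import math

R = 8.314                # J/(mol*K), universal gas constant
E_A = 100_000.0          # J/mol, assumed activation energy (illustrative)

def relative_rate(temp_c: float, ref_temp_c: float = 21.0) -> float:
    """Deterioration rate at temp_c relative to the rate at ref_temp_c."""
    t, t_ref = temp_c + 273.15, ref_temp_c + 273.15
    return math.exp(-E_A / R * (1.0 / t - 1.0 / t_ref))

# Cooling from 21 C to 10 C slows the assumed reaction roughly five-fold,
# which extends the usable life of the material by about the same factor.
extension = 1.0 / relative_rate(10.0)
print(f"approximate life-extension factor at 10 C vs 21 C: {extension:.1f}x")
```

Under this assumed model the reaction rate at 10 °C is about one fifth of the rate at 21 °C, which is consistent in spirit with the cold-storage estimates cited above, though actual figures depend on the material and on the model used.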
Digital storage capacity is a major factor in the complexity of preserving digital heritage such as video games, social media, messaging services, and email. Other areas where energy use can be reduced within conservation and restoration include: Country by country look. United States. Heritage Preservation, in partnership with the Institute of Museum and Library Services, a U.S. federal agency, produced The Heritage Health Index. The result of this work was the report "A Public Trust at Risk: The Heritage Health Index Report on the State of America's Collections", which was published in December 2005 and concluded that immediate action is needed to prevent the loss of 190 million artifacts that are in need of conservation treatment. The report made four recommendations: United Kingdom. In October 2006, the Department for Culture, Media and Sport, a governmental department, authored a document: "Understanding the Future: Priorities for England's Museums". This document was based on several years of consultation and aimed to lay out the government's priorities for museums in the 21st century. The document listed the following as priorities for the next decade: The conservation profession's response to this report was on the whole less than favourable; the Institute of Conservation (ICON) published its response under the title "A Failure of Vision". It had the following to say: Concluding: Further to this, the ICON website summary report lists the following specific recommendations: In November 2008, the UK-based think tank Demos published an influential pamphlet entitled "It's a material world: caring for the public realm", in which they argue for integrating the public directly into efforts to conserve material culture, particularly that which is in the public realm. Their argument, as stated on page 16, demonstrates their belief that society can benefit from conservation as a paradigm as well as a profession: Training. Training in conservation of cultural heritage for many years took the form of an apprenticeship, whereby an apprentice slowly developed the necessary skills to undertake their job. For some specializations within conservation this is still the case. However, it is more common in the field of conservation today that the training required to become a practicing conservator comes from a recognized university course in conservation of cultural heritage. The university can rarely provide all the necessary training in first-hand experience that an apprenticeship can, and therefore in addition to graduate-level training the profession also tends towards encouraging conservation students to spend time as interns. Conservation of cultural heritage is an interdisciplinary field as conservators have backgrounds in the fine arts, sciences (including chemistry, biology, and materials science), and closely related disciplines, such as art history, archaeology, and anthropology. They also have design, fabrication, artistic, and other special skills necessary for the practical application of that knowledge. Within the various schools that teach conservation of cultural heritage, the approach differs according to the educational and vocational system within the country, and the focus of the school itself. This is acknowledged by the American Institute for Conservation, which advises: "Specific admission requirements differ and potential candidates are encouraged to contact the programs directly for details on prerequisites, application procedures, and program curriculum".
In France, training for heritage conservation is taught by four schools, among them L'École supérieure des Beaux-Arts Tours, Angers, Le Mans, L'Université Paris 1 Panthéon-Sorbonne, and the Institut national du patrimoine. Associations and professional organizations. Societies devoted to the care of cultural heritage have been in existence around the world for many years. One early example is the founding in 1877 of the Society for the Protection of Ancient Buildings in Britain to protect the built heritage; this society continues to be active today. The 14th Dalai Lama and the Tibetan people work to preserve their cultural heritage with organizations including the Tibetan Institute of Performing Arts and an international network of eight Tibet Houses. The built heritage was at the forefront of the growth of member-based organizations in the United States. Preservation Virginia, founded in Richmond in 1889 as the Association for the Preservation of Virginia Antiquities, was the United States' first statewide historic preservation group. Today, professional conservators join and take part in the activities of numerous conservation associations and professional organizations within the wider field, and within their area of specialization. In Europe, E.C.C.O. (European Confederation of Conservator-Restorers' Organisations) was established in 1991 by 14 European Conservator-Restorers' Organisations. Currently representing close to 6,000 professionals within 23 countries and 26 member organisations, including one international body (IADA), E.C.C.O. embodies the field of preservation of cultural heritage, both movable and immovable. These organizations exist to "support the conservation professionals who preserve our cultural heritage". This involves upholding professional standards, promoting research and publications, providing educational opportunities, and fostering the exchange of knowledge among cultural conservators, allied professionals, and the public.
2447
Anton Chekhov
Anton Pavlovich Chekhov (; 29 January 1860 – 15 July 1904) was a Russian playwright and short-story writer who is considered to be one of the greatest writers of all time. His career as a playwright produced four classics, and his best short stories are held in high esteem by writers and critics. Along with Henrik Ibsen and August Strindberg, Chekhov is often referred to as one of the three seminal figures in the birth of early modernism in the theatre. Chekhov was a physician by profession. "Medicine is my lawful wife", he once said, "and literature is my mistress." Chekhov renounced the theatre after the reception of "The Seagull" in 1896, but the play was revived to acclaim in 1898 by Konstantin Stanislavski's Moscow Art Theatre, which subsequently also produced Chekhov's "Uncle Vanya" and premiered his last two plays, "Three Sisters" and "The Cherry Orchard". These four works present a challenge to the acting ensemble as well as to audiences, because in place of conventional action Chekhov offers a "theatre of mood" and a "submerged life in the text". The plays that Chekhov wrote were not complex, but easy to follow, and created a somewhat haunting atmosphere for the audience. Chekhov at first wrote stories to earn money, but as his artistic ambition grew, he made formal innovations that influenced the evolution of the modern short story. He made no apologies for the difficulties this posed to readers, insisting that the role of an artist was to ask questions, not to answer them. Biography. Childhood. Anton Chekhov was born into a Russian family on the feast day of St. Anthony the Great (17 January Old Style) 29 January 1860 in Taganrog, a port on the Sea of Azov – on Politseyskaya (Police) street, later renamed Chekhova street – in southern Russia. He was the third of six surviving children. His father, Pavel Yegorovich Chekhov, the son of a former serf and his wife, was from the village Olkhovatka (Voronezh Governorate) and ran a grocery store. A director of the parish choir, devout Orthodox Christian, and physically abusive father, Pavel Chekhov has been seen by some historians as the model for his son's many portraits of hypocrisy. Chekhov's mother, Yevgeniya (Morozova), was an excellent storyteller who entertained the children with tales of her travels with her cloth-merchant father all over Russia. "Our talents we got from our father," Chekhov remembered, "but our soul from our mother." In adulthood, Chekhov criticised his brother Alexander's treatment of his wife and children by reminding him of Pavel's tyranny: "Let me ask you to recall that it was despotism and lying that ruined your mother's youth. Despotism and lying so mutilated our childhood that it's sickening and frightening to think about it. Remember the horror and disgust we felt in those times when Father threw a tantrum at dinner over too much salt in the soup and called Mother a fool." Chekhov attended the Greek School in Taganrog and the Taganrog "Gymnasium" (since renamed the Chekhov Gymnasium), where he was held back for a year at fifteen for failing an examination in Ancient Greek. He sang at the Greek Orthodox monastery in Taganrog and in his father's choirs. In a letter of 1892, he used the word "suffering" to describe his childhood and recalled: In 1876, Chekhov's father was declared bankrupt after overextending his finances building a new house, having been cheated by a contractor named Mironov. 
To avoid debtor's prison he fled to Moscow, where his two eldest sons, Alexander and Nikolai, were attending university. The family lived in poverty in Moscow. Chekhov's mother was physically and emotionally broken by the experience. Chekhov was left behind to sell the family's possessions and finish his education. He remained in Taganrog for three more years, boarding with a man by the name of Selivanov who, like Lopakhin in "The Cherry Orchard", had bailed out the family for the price of their house. Chekhov had to pay for his own education, which he managed by private tutoring, catching and selling goldfinches, and selling short sketches to the newspapers, among other jobs. He sent every ruble he could spare to his family in Moscow, along with humorous letters to cheer them up. During this time, he read widely and analytically, including the works of Cervantes, Turgenev, Goncharov, and Schopenhauer, and wrote a full-length comic drama, "Fatherless", which his brother Alexander dismissed as "an inexcusable though innocent fabrication". Chekhov also experienced a series of love affairs, one with the wife of a teacher. In 1879, Chekhov completed his schooling and joined his family in Moscow, having gained admission to the medical school at I.M. Sechenov First Moscow State Medical University. Early writings. Chekhov then assumed responsibility for the whole family. To support them and to pay his tuition fees, he wrote daily short, humorous sketches and vignettes of contemporary Russian life, many under pseudonyms such as "Antosha Chekhonte" (Антоша Чехонте) and "Man Without Spleen" (Человек без селезенки). His prodigious output gradually earned him a reputation as a satirical chronicler of Russian street life, and by 1882 he was writing for "Oskolki" ("Fragments"), owned by Nikolai Leykin, one of the leading publishers of the time. Chekhov's tone at this stage was harsher than that familiar from his mature fiction. In 1884, Chekhov qualified as a physician, which he considered his principal profession though he made little money from it and treated the poor free of charge. In 1884 and 1885, Chekhov found himself coughing blood, and in 1886 the attacks worsened, but he would not admit his tuberculosis to his family or his friends. He confessed to Leykin, "I am afraid to submit myself to be sounded by my colleagues." He continued writing for weekly periodicals, earning enough money to move the family into progressively better accommodations. Early in 1886 he was invited to write for one of the most popular papers in St. Petersburg, "Novoye Vremya" ("New Times"), owned and edited by the millionaire magnate Alexey Suvorin, who paid a rate per line double Leykin's and allowed Chekhov three times the space. Suvorin was to become a lifelong friend, perhaps Chekhov's closest. Before long, Chekhov was attracting literary as well as popular attention. The sixty-four-year-old Dmitry Grigorovich, a celebrated Russian writer of the day, wrote to Chekhov after reading his short story "The Huntsman" that "You have "real" talent, a talent that places you in the front rank among writers in the new generation." He went on to advise Chekhov to slow down, write less, and concentrate on literary quality. Chekhov replied that the letter had struck him "like a thunderbolt" and confessed, "I have written my stories the way reporters write up their notes about fires—mechanically, half-consciously, caring nothing about either the reader or myself." 
The admission may have done Chekhov a disservice, since early manuscripts reveal that he often wrote with extreme care, continually revising. Grigorovich's advice nevertheless inspired a more serious, artistic ambition in the twenty-six-year-old. In 1888, with a little string-pulling by Grigorovich, the short story collection "At Dusk" ("V Sumerkakh") won Chekhov the coveted Pushkin Prize "for the best literary production distinguished by high artistic worth". Turning points. In 1887, exhausted from overwork and ill health, Chekhov took a trip to Ukraine, which reawakened him to the beauty of the steppe. On his return, he began the novella-length short story "The Steppe", which he called "something rather odd and much too original", and which was eventually published in "Severny Vestnik" ("The Northern Herald"). In a narrative that drifts with the thought processes of the characters, Chekhov evokes a chaise journey across the steppe through the eyes of a young boy sent to live away from home, and his companions, a priest and a merchant. "The Steppe" has been called a "dictionary of Chekhov's poetics", and it represented a significant advance for Chekhov, exhibiting much of the quality of his mature fiction and winning him publication in a literary journal rather than a newspaper. In autumn 1887, a theatre manager named Korsh commissioned Chekhov to write a play, the result being "Ivanov", written in a fortnight and produced that November. Though Chekhov found the experience "sickening" and painted a comic portrait of the chaotic production in a letter to his brother Alexander, the play was a hit and was praised, to Chekhov's bemusement, as a work of originality. Although Chekhov did not fully realise it at the time, his plays, such as "The Seagull" (written in 1895), "Uncle Vanya" (written in 1897), "The Three Sisters" (written in 1900), and "The Cherry Orchard" (written in 1903), laid a revolutionary foundation for what is now taken for granted in acting: the effort to recreate and express the realism of how people truly act and speak with each other. This realistic manifestation of the human condition may engender in audiences reflection upon what it means to be human. This philosophy of acting has not only endured but has served as a cornerstone of the craft for much of the 20th century and to this day. Mikhail Chekhov considered "Ivanov" a key moment in his brother's intellectual development and literary career. From this period comes an observation of Chekhov's that has become known as "Chekhov's gun", a dramatic principle that requires that every element in a narrative be necessary and irreplaceable, and that everything else be removed. The death of Chekhov's brother Nikolai from tuberculosis in 1889 influenced "A Dreary Story", finished that September, about a man who confronts the end of a life that he realises has been without purpose. Mikhail Chekhov, who recorded his brother's depression and restlessness after Nikolai's death, was researching prisons at the time as part of his law studies, and Anton Chekhov, in a search for purpose in his own life, himself soon became obsessed with the issue of prison reform. Sakhalin. In 1890, Chekhov undertook an arduous journey by train, horse-drawn carriage, and river steamer to the Russian Far East and the "katorga", or penal colony, on Sakhalin Island, north of Japan, where he spent three months interviewing thousands of convicts and settlers for a census.
The letters Chekhov wrote during the two-and-a-half-month journey to Sakhalin are considered to be among his best. His remarks to his sister about Tomsk were to become notorious. Chekhov witnessed much on Sakhalin that shocked and angered him, including floggings, embezzlement of supplies, and forced prostitution of women. He wrote, "There were times I felt that I saw before me the extreme limits of man's degradation." He was particularly moved by the plight of the children living in the penal colony with their parents. For example: Chekhov later concluded that charity was not the answer, but that the government had a duty to finance humane treatment of the convicts. His findings were published in 1893 and 1894 as "Ostrov Sakhalin" ("The Island of Sakhalin"), a work of social science, not literature. Chekhov found literary expression for the "Hell of Sakhalin" in his long short story "", the last section of which is set on Sakhalin, where the murderer Yakov loads coal in the night while longing for home. Chekhov's writing on Sakhalin, especially the traditions and habits of the Gilyak people, is the subject of a sustained meditation and analysis in Haruki Murakami's novel "1Q84". It is also the subject of a poem by the Nobel Prize winner Seamus Heaney, "Chekhov on Sakhalin" (collected in the volume "Station Island"). Rebecca Gould has compared Chekhov's book on Sakhalin to Katherine Mansfield's "Urewera Notebook" (1907). In 2013, the Wellcome Trust-funded play 'A Russian Doctor', performed by Andrew Dawson and researched by Professor Jonathan Cole, explored Chekhov's experiences on Sakhalin Island. Melikhovo. Mikhail Chekhov, a member of the household at Melikhovo, described the extent of his brother's medical commitments: Chekhov's expenditure on drugs was considerable, but the greatest cost was making journeys of several hours to visit the sick, which reduced his time for writing. However, Chekhov's work as a doctor enriched his writing by bringing him into intimate contact with all sections of Russian society: for example, he witnessed at first hand the peasants' unhealthy and cramped living conditions, which he recalled in his short story "Peasants". Chekhov visited the upper classes as well, recording in his notebook: "Aristocrats? The same ugly bodies and physical uncleanliness, the same toothless old age and disgusting death, as with market-women." In 1893/1894 he worked as a Zemstvo doctor in Zvenigorod, which has numerous sanatoriums and rest homes. A local hospital is named after him. In 1894, Chekhov began writing his play "The Seagull" in a lodge he had built in the orchard at Melikhovo. In the two years since he had moved to the estate, he had refurbished the house, taken up agriculture and horticulture, tended the orchard and the pond, and planted many trees, which, according to Mikhail, he "looked after ... as though they were his children. Like Colonel Vershinin in his "Three Sisters", as he looked at them he dreamed of what they would be like in three or four hundred years." The first night of "The Seagull", at the Alexandrinsky Theatre in St. Petersburg on 17 October 1896, was a fiasco, as the play was booed by the audience, stinging Chekhov into renouncing the theatre. But the play so impressed the theatre director Vladimir Nemirovich-Danchenko that he convinced his colleague Konstantin Stanislavski to direct a new production for the innovative Moscow Art Theatre in 1898. 
Stanislavski's attention to psychological realism and ensemble playing coaxed the buried subtleties from the text, and restored Chekhov's interest in playwriting. The Art Theatre commissioned more plays from Chekhov and the following year staged "Uncle Vanya", which Chekhov had completed in 1896. In the last decades of his life he became an atheist. Yalta. In March 1897, Chekhov suffered a major haemorrhage of the lungs while on a visit to Moscow. With great difficulty he was persuaded to enter a clinic, where doctors diagnosed tuberculosis on the upper part of his lungs and ordered a change in his manner of life. After his father's death in 1898, Chekhov bought a plot of land on the outskirts of Yalta and built a villa (The White Dacha), into which he moved with his mother and sister the following year. Though he planted trees and flowers, kept dogs and tame cranes, and received guests such as Leo Tolstoy and Maxim Gorky, Chekhov was always relieved to leave his "hot Siberia" for Moscow or travels abroad. He vowed to move to Taganrog as soon as a water supply was installed there. In Yalta he completed two more plays for the Art Theatre, composing with greater difficulty than in the days when he "wrote serenely, the way I eat pancakes now". He took a year each over "Three Sisters" and "The Cherry Orchard". On 25 May 1901, Chekhov married Olga Knipper quietly, owing to his horror of weddings. She was a former protégée and sometime lover of Vladimir Nemirovich-Danchenko whom he had first met at rehearsals for "The Seagull". Up to that point, Chekhov, known as "Russia's most elusive literary bachelor", had preferred passing liaisons and visits to brothels over commitment. He had once written to Suvorin: The letter proved prophetic of Chekhov's marital arrangements with Olga: he lived largely at Yalta, she in Moscow, pursuing her acting career. In 1902, Olga suffered a miscarriage; and Donald Rayfield has offered evidence, based on the couple's letters, that conception occurred when Chekhov and Olga were apart, although other Russian scholars have rejected that claim. The literary legacy of this long-distance marriage is a correspondence that preserves gems of theatre history, including shared complaints about Stanislavski's directing methods and Chekhov's advice to Olga about performing in his plays. In Yalta, Chekhov wrote one of his most famous stories, "The Lady with the Dog" (also translated from the Russian as "Lady with Lapdog"), which depicts what at first seems a casual liaison between a cynical married man and an unhappy married woman who meet while holidaying in Yalta. Neither expects anything lasting from the encounter. Unexpectedly though, they gradually fall deeply in love and end up risking scandal and the security of their family lives. The story masterfully captures their feelings for each other, the inner transformation undergone by the disillusioned male protagonist as a result of falling deeply in love, and their inability to resolve the matter by either letting go of their families or of each other. Death. In May 1903, Chekhov visited Moscow; the prominent lawyer Vasily Maklakov visited him almost every day. Maklakov signed Chekhov's will. By May 1904, Chekhov was terminally ill with tuberculosis. Mikhail Chekhov recalled that "everyone who saw him secretly thought the end was not far off, but the nearer [he] was to the end, the less he seemed to realise it". 
On 3 June, he set off with Olga for the spa town of Badenweiler in the Black Forest in Germany, from where he wrote outwardly jovial letters to his sister Masha, describing the food and surroundings, and assuring her and his mother that he was getting better. In his last letter, he complained about the way German women dressed. Chekhov died on 15 July 1904 at the age of 44 after a long fight with tuberculosis, the same disease that killed his brother. Chekhov's death has become one of "the great set pieces of literary history", retold, embroidered, and fictionalized many times since, notably in the 1987 short story "Errand" by Raymond Carver. In 1908, Olga wrote this account of her husband's last moments: Chekhov's body was transported to Moscow in a refrigerated railway-car meant for oysters, a detail that offended Gorky. Some of the thousands of mourners followed the funeral procession of a General Keller by mistake, to the accompaniment of a military band. Chekhov was buried next to his father at the Novodevichy Cemetery. Legacy. A few months before he died, Chekhov told the writer Ivan Bunin that he thought people might go on reading his writings for seven years. "Why seven?", asked Bunin. "Well, seven and a half", Chekhov replied. "That's not bad. I've got six years to live." Chekhov's posthumous reputation greatly exceeded his expectations. The ovations for the play "The Cherry Orchard" in the year of his death served to demonstrate the Russian public's acclaim for the writer, which placed him second in literary celebrity only to Tolstoy, who outlived him by six years. Tolstoy was an early admirer of Chekhov's short stories and had a series that he deemed "first quality" and "second quality" bound into a book. In the first category were: "Children", "The Chorus Girl", "A Play", "Home", "Misery", "The Runaway", "In Court", "Vanka", "Ladies", "A Malefactor", "The Boys", "Darkness", "Sleepy", "The Helpmate", and "The Darling"; in the second: "A Transgression", "Sorrow", "The Witch", "Verochka", "In a Strange Land", "The Cook's Wedding", "A Tedious Business", "An Upheaval", "Oh! The Public!", "The Mask", "A Woman's Luck", "Nerves", "The Wedding", "A Defenceless Creature", and "Peasant Wives." Chekhov's work also found praise from several of Russia's most influential radical political thinkers. If anyone doubted the gloom and miserable poverty of Russia in the 1880s, the anarchist theorist Peter Kropotkin responded, "read only Chekhov's novels!" Raymond Tallis further recounts that Vladimir Lenin believed his reading of the short story "Ward No. 6" "made him a revolutionary". Upon finishing the story, Lenin is said to have remarked: "I absolutely had the feeling that I was shut up in Ward 6 myself!" In Chekhov's lifetime, British and Irish critics generally did not find his work pleasing; E. J. Dillon thought "the effect on the reader of Chekhov's tales was repulsion at the gallery of human waste represented by his fickle, spineless, drifting people" and R. E. C. Long said "Chekhov's characters were repugnant, and that Chekhov revelled in stripping the last rags of dignity from the human soul". After his death, Chekhov was reappraised. Constance Garnett's translations won him an English-language readership and the admiration of writers such as James Joyce, Virginia Woolf, and Katherine Mansfield, whose story "The Child Who Was Tired" is similar to Chekhov's "Sleepy". The Russian critic D. S.
Mirsky, who lived in England, explained Chekhov's popularity in that country by his "unusually complete rejection of what we may call the heroic values". In Russia itself, Chekhov's drama fell out of fashion after the revolution, but it was later incorporated into the Soviet canon. The character of Lopakhin, for example, was reinvented as a hero of the new order, rising from a modest background so as eventually to possess the gentry's estates. Despite Chekhov's reputation as a playwright, William Boyd asserts that his short stories represent the greater achievement. Raymond Carver, who wrote the short story "Errand" about Chekhov's death, believed that Chekhov was the greatest of all short story writers: Style. One of the first non-Russians to praise Chekhov's plays was George Bernard Shaw, who subtitled his "Heartbreak House" "A Fantasia in the Russian Manner on English Themes", and pointed out similarities between the predicament of the British landed class and that of their Russian counterparts as depicted by Chekhov: "the same nice people, the same utter futility". Ernest Hemingway, another writer influenced by Chekhov, was more grudging: "Chekhov wrote about six good stories. But he was an amateur writer." And Vladimir Nabokov criticised Chekhov's "medley of dreadful prosaisms, ready-made epithets, repetitions". But he also declared "yet it is his works which I would take on a trip to another planet" and called "The Lady with the Dog" "one of the greatest stories ever written" in its depiction of a problematic relationship, and described Chekhov as writing "the way one person relates to another the most important things in his life, slowly and yet without a break, in a slightly subdued voice". For the writer William Boyd, Chekhov's historical accomplishment was to abandon what William Gerhardie called the "event plot" for something more "blurred, interrupted, mauled or otherwise tampered with by life". Virginia Woolf mused on the unique quality of a Chekhov story in "The Common Reader" (1925): Michael Goldman has said of the elusive quality of Chekhov's comedies: "Having learned that Chekhov is comic ... Chekhov is comic in a very special, paradoxical way. His plays depend, as comedy does, on the vitality of the actors to make pleasurable what would otherwise be painfully awkward—inappropriate speeches, missed connections, "faux pas", stumbles, childishness—but as part of a deeper pathos; the stumbles are not pratfalls but an energized, graceful dissolution of purpose." Influence on dramatic arts. In the United States, Chekhov's reputation began its rise slightly later, partly through the influence of Stanislavski's system of acting, with its notion of subtext: "Chekhov often expressed his thought not in speeches", wrote Stanislavski, "but in pauses or between the lines or in replies consisting of a single word ... the characters often feel and think things not expressed in the lines they speak." The Group Theatre, in particular, developed the subtextual approach to drama, influencing generations of American playwrights, screenwriters, and actors, including Clifford Odets, Elia Kazan and, in particular, Lee Strasberg. In turn, Strasberg's Actors Studio and the "Method" acting approach influenced many actors, including Marlon Brando and Robert De Niro, though by then the Chekhov tradition may have been distorted by a preoccupation with realism. In 1981, the playwright Tennessee Williams adapted "The Seagull" as "The Notebook of Trigorin". 
One of Anton's nephews, Michael Chekhov, would also contribute heavily to modern theatre, particularly through his unique acting methods, which developed Stanislavski's ideas further. Alan Twigg, the chief editor and publisher of the Canadian book review magazine "B.C. BookWorld", wrote: Chekhov has also influenced the work of Japanese playwrights including Shimizu Kunio, Yōji Sakate, and Ai Nagai. Critics have noted similarities in how Chekhov and Shimizu use a mixture of light humour and intense depictions of longing. Sakate adapted several of Chekhov's plays and transformed them in the general style of "nō". Nagai also adapted Chekhov's plays, including "Three Sisters", and transformed his dramatic style into Nagai's style of satirical realism while emphasising the social issues depicted in the play. Chekhov's works have been adapted for the screen, including Sidney Lumet's "Sea Gull" and Louis Malle's "Vanya on 42nd Street". Laurence Olivier's final effort as a film director was a 1970 adaptation of "Three Sisters" in which he also played a supporting role. His work has also served as inspiration or been referenced in numerous films. In Andrei Tarkovsky's 1975 film "The Mirror", characters discuss his short story "Ward No. 6". Woody Allen has been influenced by Chekhov and references to his works are present in many of his films including "Love and Death" (1975), "Interiors" (1978) and "Hannah and Her Sisters" (1986). Plays by Chekhov are also referenced in François Truffaut's 1980 drama film "The Last Metro", which is set in a theatre. "The Cherry Orchard" has a role in the comedy film "Henry's Crime" (2011). A portion of a stage production of "Three Sisters" appears in the 2014 drama film "Still Alice". The winner of the 2022 Academy Award for Best International Feature Film, "Drive My Car", is centered on a production of "Uncle Vanya". Several of Chekhov's short stories were adapted as episodes of the 1986 Indian anthology television series "Katha Sagar". Another Indian television series titled "Chekhov Ki Duniya" aired on DD National in the 1990s, adapting different works of Chekhov. Nuri Bilge Ceylan's Palme d'Or winner "Winter Sleep" was adapted from the short story "The Wife" by Anton Chekhov.
2448
Action Against Hunger
Action Against Hunger () is a global humanitarian organization which originated in France and is committed to ending world hunger. The organization helps malnourished children and provides communities with access to safe water and sustainable solutions to hunger. In 2020, Action Against Hunger worked in 51 countries around the world with more than 8,300 employees and volunteers helping 13.6 million people in need. Action Against Hunger was established in 1979 by a group of French doctors, scientists, and writers. Nobel Prize-winning physicist Alfred Kastler served as the organization's first chairman. Currently, Mumbai-based businessman and philanthropist Ashwini Kakkar serves as International President of the Action Against Hunger network. The group initially provided assistance to Afghan refugees in Pakistan, famine-stricken Ugandan communities, and Cambodian refugees in Thailand. It expanded to address additional humanitarian concerns in Africa, the Middle East, Southeast Asia, the Balkans, and elsewhere during the 1980s and 1990s. Action Against Hunger's Scientific Committee pioneered the therapeutic milk formula (F100), now used by all major humanitarian aid organizations to treat acute malnutrition. Early results showed that treatment with F100 has the capacity to reduce the mortality rate of severely malnourished children to below 5%, compared with a quoted median hospital fatality rate of 23.5%. A few years later, the therapeutic milk was repackaged as ready-to-use therapeutic foods (RUTFs), a peanut-based paste packaged like a power bar. These bars allow for the treatment of malnutrition at home and do not require any preparation or refrigeration. The international network currently has headquarters in six countries – France, Spain, the United States, Canada, Italy, and the UK. Its four main areas of work are nutrition, food security, water and sanitation, and advocacy. The integrated approaches with various sectors of intervention are: In 2022, Action Against Hunger USA began leading a USAID-funded project to address health and nutrition challenges associated with policy, advocacy, financing, and governance in communities around the world, working in partnership with leading organizations such as Pathfinder International, Amref Health Africa, Global Communities, Humanity & Inclusion, Kupenda for the Children, and Results for Development. Restaurants against hunger. Action Against Hunger partners with leaders from the food and beverage industry to bring attention to global hunger. Each year, several campaigns are run by the network to raise funds and support the organisation's programs: Restaurants Against Hunger and Love Food Give Food. Countries of intervention. In 2019, the Action Against Hunger International Network was present in 51 countries: Africa. Burkina Faso, Burundi, Cameroon, Ivory Coast, Djibouti, Ethiopia, Kenya, Liberia, Malawi, Madagascar, Mali, Mauritania, Niger, Nigeria, Uganda, Central African Republic, Democratic Republic of the Congo, Senegal, Sierra Leone, Somalia, South Sudan, Tanzania, Chad, Zimbabwe Asia. Bangladesh, Myanmar, Cambodia, India, Indonesia, Mongolia, Nepal, Pakistan, Philippines, South Caucasus Caribbean. Haiti Europe. Turkey, Ukraine Middle East. Afghanistan, Azerbaijan, Egypt, Lebanon, Syria, Palestinian Occupied Territories, Yemen, Jordan, Iraq Latin America. Colombia, Guatemala, Nicaragua, Paraguay, Peru, Honduras Action Against Hunger international network.
Since 1995, Action Against Hunger has developed an international network to achieve a greater global impact. The network has six headquarters, in France, Spain, the United Kingdom, the United States, Canada and Italy. Action Against Hunger also has a West Africa Regional Office (WARO) located in Dakar, a training centre in Nairobi, and five logistics platforms (Lyon, Paris, Barcelona, Dubai, Panama). This network increases the organisation's human and financial capacities and enables each headquarters to specialise. See also. 2006 Trincomalee massacre of NGO workers
2452
AW
A&W, AW, Aw, aW or aw may refer to:
2457
Apoptosis
Apoptosis is a form of programmed cell death that occurs in multicellular organisms. Biochemical events lead to characteristic cell changes (morphology) and death. These changes include blebbing, cell shrinkage, nuclear fragmentation, chromatin condensation, DNA fragmentation, and mRNA decay. The average adult human loses between 50 and 70 billion cells each day due to apoptosis. For an average human child between eight and fourteen years old, approximately twenty to thirty billion cells die per day. In contrast to necrosis, which is a form of traumatic cell death that results from acute cellular injury, apoptosis is a highly regulated and controlled process that confers advantages during an organism's life cycle. For example, the separation of fingers and toes in a developing human embryo occurs because cells between the digits undergo apoptosis. Unlike necrosis, apoptosis produces cell fragments called apoptotic bodies that phagocytes are able to engulf and remove before the contents of the cell can spill out onto surrounding cells and cause damage to them. Because apoptosis cannot stop once it has begun, its initiation is highly regulated. Apoptosis can be initiated through one of two pathways. In the "intrinsic pathway" the cell kills itself because it senses cell stress, while in the "extrinsic pathway" the cell kills itself because of signals from other cells. Weak external signals may also activate the intrinsic pathway of apoptosis. Both pathways induce cell death by activating caspases, which are proteases, or enzymes that degrade proteins. The two pathways both activate initiator caspases, which then activate executioner caspases, which then kill the cell by degrading proteins indiscriminately. In addition to its importance as a biological phenomenon, defective apoptotic processes have been implicated in a wide variety of diseases. Excessive apoptosis causes atrophy, whereas an insufficient amount results in uncontrolled cell proliferation, such as cancer. Some factors like Fas receptors and caspases promote apoptosis, while some members of the Bcl-2 family of proteins inhibit apoptosis. Discovery and etymology. German scientist Carl Vogt was first to describe the principle of apoptosis in 1842. In 1885, anatomist Walther Flemming delivered a more precise description of the process of programmed cell death. However, it was not until 1965 that the topic was resurrected. While studying tissues using electron microscopy, John Kerr at the University of Queensland was able to distinguish apoptosis from traumatic cell death. Following the publication of a paper describing the phenomenon, Kerr was invited to join Alastair Currie, as well as Andrew Wyllie, who was Currie's graduate student, at the University of Aberdeen. In 1972, the trio published a seminal article in the "British Journal of Cancer". Kerr had initially used the term programmed cell necrosis, but in the article, the process of natural cell death was called "apoptosis". Kerr, Wyllie and Currie credited James Cormack, a professor of Greek language at the University of Aberdeen, with suggesting the term apoptosis. Kerr received the Paul Ehrlich and Ludwig Darmstaedter Prize on March 14, 2000, for his description of apoptosis. He shared the prize with Boston biologist H. Robert Horvitz. For many years, neither "apoptosis" nor "programmed cell death" was a highly cited term. 
Two discoveries brought cell death from obscurity to a major field of research: identification of the first component of the cell death control and effector mechanisms, and linkage of abnormalities in cell death to human disease, in particular cancer. This occurred in 1988 when it was shown that BCL2, the gene responsible for follicular lymphoma, encoded a protein that inhibited cell death. The 2002 Nobel Prize in Medicine was awarded to Sydney Brenner, H. Robert Horvitz and John Sulston for their work identifying genes that control apoptosis. The genes were identified by studies in the nematode "C. elegans" and homologues of these genes function in humans to regulate apoptosis. In Greek, apoptosis translates to the "falling off" of leaves from a tree. Cormack, a professor of Greek language, reintroduced the term for medical use as it had a medical meaning for the Greeks over two thousand years before. Hippocrates used the term to mean "the falling off of the bones". Galen extended its meaning to "the dropping of the scabs". Cormack was no doubt aware of this usage when he suggested the name. Debate continues over the correct pronunciation, with opinion divided between a pronunciation with the second "p" silent and one with the second "p" pronounced. In English, the "p" of the Greek "-pt-" consonant cluster is typically silent at the beginning of a word (e.g. pterodactyl, Ptolemy), but articulated when used in combining forms preceded by a vowel, as in helicopter or the orders of insects: diptera, lepidoptera, etc. In the original Kerr, Wyllie & Currie paper, there is a footnote regarding the pronunciation: We are most grateful to Professor James Cormack of the Department of Greek, University of Aberdeen, for suggesting this term. The word "apoptosis" is used in Greek to describe the "dropping off" or "falling off" of petals from flowers, or leaves from trees. To show the derivation clearly, we propose that the stress should be on the penultimate syllable, the second half of the word being pronounced like "ptosis" (with the "p" silent), which comes from the same root "to fall", and is already used to describe the drooping of the upper eyelid. Activation mechanisms. The initiation of apoptosis is tightly regulated by activation mechanisms, because once apoptosis has begun, it inevitably leads to the death of the cell. The two best-understood activation mechanisms are the intrinsic pathway (also called the mitochondrial pathway) and the extrinsic pathway. The "intrinsic pathway" is activated by intracellular signals generated when cells are stressed and depends on the release of proteins from the intermembrane space of mitochondria. The "extrinsic pathway" is activated by extracellular ligands binding to cell-surface death receptors, which leads to the formation of the death-inducing signaling complex (DISC). A cell initiates intracellular apoptotic signaling in response to a stress, which may bring about cell suicide. The binding of nuclear receptors by glucocorticoids, heat, radiation, nutrient deprivation, viral infection, hypoxia, increased intracellular concentration of free fatty acids, and increased intracellular calcium concentration (for example, from damage to the membrane) can all trigger the release of intracellular apoptotic signals by a damaged cell. A number of cellular components, such as poly ADP ribose polymerase, may also help regulate apoptosis. Single-cell fluctuations have been observed in experimental studies of stress-induced apoptosis. 
Before the actual process of cell death is precipitated by enzymes, apoptotic signals must cause regulatory proteins to initiate the apoptosis pathway. This step allows those signals to cause cell death, or the process to be stopped, should the cell no longer need to die. Several proteins are involved, but two main methods of regulation have been identified: the targeting of mitochondria functionality, or directly transducing the signal via adaptor proteins to the apoptotic mechanisms. An extrinsic pathway for initiation identified in several toxin studies is an increase in calcium concentration within a cell caused by drug activity, which can also cause apoptosis via the calcium-binding protease calpain. Intrinsic pathway. The intrinsic pathway is also known as the mitochondrial pathway. Mitochondria are essential to multicellular life. Without them, a cell ceases to respire aerobically and quickly dies. This fact forms the basis for some apoptotic pathways. Apoptotic proteins that target mitochondria affect them in different ways. They may cause mitochondrial swelling through the formation of membrane pores, or they may increase the permeability of the mitochondrial membrane and cause apoptotic effectors to leak out. These events are closely tied to the intrinsic pathway, and tumors arise more frequently through the intrinsic pathway than the extrinsic pathway because of its sensitivity. There is also a growing body of evidence indicating that nitric oxide is able to induce apoptosis by helping to dissipate the membrane potential of mitochondria and therefore make it more permeable. Nitric oxide has been implicated in initiating and inhibiting apoptosis through its possible action as a signal molecule of subsequent pathways that activate apoptosis. During apoptosis, cytochrome "c" is released from mitochondria through the actions of the proteins Bax and Bak. The mechanism of this release is enigmatic, but appears to stem from a multitude of Bax/Bak homo- and hetero-dimers inserted into the outer membrane. Once cytochrome "c" is released, it binds with Apoptotic protease activating factor-1 ("Apaf-1") and ATP, which then bind to "pro-caspase-9" to create a protein complex known as an apoptosome. The apoptosome cleaves the pro-caspase to its active form of caspase-9, which in turn cleaves and activates pro-caspase-3 into the effector "caspase-3". Mitochondria also release proteins known as SMACs (second mitochondria-derived activator of caspases) into the cell's cytosol following the increase in permeability of the mitochondria membranes. SMAC binds to inhibitor of apoptosis proteins (IAPs), thereby deactivating them and preventing the IAPs from arresting the process, therefore allowing apoptosis to proceed. IAPs also normally suppress the activity of a group of cysteine proteases called caspases, which carry out the degradation of the cell. Therefore, the actual degradation enzymes can be seen to be indirectly regulated by mitochondrial permeability. Extrinsic pathway. Two theories of the direct initiation of apoptotic mechanisms in mammals have been suggested: the "TNF-induced" (tumor necrosis factor) model and the "Fas-Fas ligand-mediated" model, both involving receptors of the "TNF receptor" (TNFR) family coupled to extrinsic signals. TNF pathway. TNF-alpha is a cytokine produced mainly by activated macrophages, and is the major extrinsic mediator of apoptosis. Most cells in the human body have two receptors for TNF-alpha: TNFR1 and TNFR2. 
The binding of TNF-alpha to TNFR1 has been shown to initiate the pathway that leads to caspase activation via the intermediate membrane proteins TNF receptor-associated death domain (TRADD) and Fas-associated death domain protein (FADD). cIAP1/2 can inhibit TNF-α signaling by binding to TRAF2. FLIP inhibits the activation of caspase-8. Binding of this receptor can also indirectly lead to the activation of transcription factors involved in cell survival and inflammatory responses. However, signalling through TNFR1 might also induce apoptosis in a caspase-independent manner. The link between TNF-alpha and apoptosis shows why an abnormal production of TNF-alpha plays a fundamental role in several human diseases, especially in autoimmune diseases. The TNF-alpha receptor superfamily also includes death receptors (DRs), such as DR4 and DR5. These receptors bind to the protein TRAIL and mediate apoptosis. Apoptosis is known to be one of the primary mechanisms of targeted cancer therapy. Luminescent iridium complex-peptide hybrids (IPHs) have recently been designed, which mimic TRAIL and bind to death receptors on cancer cells, thereby inducing their apoptosis. Fas pathway. The Fas receptor (first apoptosis signal; also known as "Apo-1" or "CD95") is a transmembrane protein of the TNF family which binds the Fas ligand (FasL). The interaction between Fas and FasL results in the formation of the "death-inducing signaling complex" (DISC), which contains FADD, caspase-8 and caspase-10. In some types of cells (type I), processed caspase-8 directly activates other members of the caspase family, and triggers the execution of apoptosis of the cell. In other types of cells (type II), the "Fas"-DISC starts a feedback loop that spirals into increasing release of proapoptotic factors from mitochondria and the amplified activation of caspase-8. Common components. Following "TNF-R1" and "Fas" activation in mammalian cells, a balance between proapoptotic (BAX, BID, BAK, or BAD) and anti-apoptotic ("Bcl-Xl" and "Bcl-2") members of the "Bcl-2" family is established. This balance is the proportion of proapoptotic homodimers that form in the outer membrane of the mitochondrion. The proapoptotic homodimers are required to make the mitochondrial membrane permeable for the release of caspase activators such as cytochrome c and SMAC. Control of proapoptotic proteins under normal cell conditions of nonapoptotic cells is incompletely understood, but in general, Bax or Bak are activated by the activation of BH3-only proteins, part of the Bcl-2 family. Caspases. Caspases play the central role in the transduction of ER apoptotic signals. Caspases are highly conserved, cysteine-dependent aspartate-specific proteases. There are two types of caspases: initiator caspases (caspases 2, 8, 9, 10, 11 and 12) and effector caspases (caspases 3, 6 and 7). The activation of initiator caspases requires binding to a specific oligomeric activator protein. Effector caspases are then activated by these active initiator caspases through proteolytic cleavage. The active effector caspases then proteolytically degrade a host of intracellular proteins to carry out the cell death program. Caspase-independent apoptotic pathway. There also exists a caspase-independent apoptotic pathway that is mediated by AIF (apoptosis-inducing factor). Apoptosis model in amphibians. The frog "Xenopus laevis" serves as an ideal model system for the study of the mechanisms of apoptosis. 
In fact, iodine and thyroxine also stimulate the spectacular apoptosis of the cells of the larval gills, tail and fins in amphibian metamorphosis, and stimulate the evolution of their nervous system, transforming the aquatic, vegetarian tadpole into the terrestrial, carnivorous frog. Negative regulators of apoptosis. Negative regulation of apoptosis inhibits cell death signaling pathways, helping tumors to evade cell death and develop drug resistance. The ratio between anti-apoptotic (Bcl-2) and pro-apoptotic (Bax) proteins determines whether a cell lives or dies. Many families of proteins act as negative regulators, categorized as either antiapoptotic factors, such as IAPs and Bcl-2 proteins, or prosurvival factors, such as cFLIP, BNIP3, FADD, Akt, and NF-κB. Proteolytic caspase cascade: Killing the cell. Many pathways and signals lead to apoptosis, but these converge on a single mechanism that actually causes the death of the cell. After a cell receives a stimulus, it undergoes organized degradation of cellular organelles by activated proteolytic caspases. In addition to the destruction of cellular organelles, mRNA is rapidly and globally degraded by a mechanism that is not yet fully characterized. mRNA decay is triggered very early in apoptosis. A cell undergoing apoptosis shows a series of characteristic early morphological changes. Apoptosis progresses quickly and its products are quickly removed, making it difficult to detect or visualize on classical histology sections. During karyorrhexis, endonuclease activation leaves short DNA fragments, regularly spaced in size. These give a characteristic "laddered" appearance on agar gel after electrophoresis. Tests for DNA laddering differentiate apoptosis from ischemic or toxic cell death. Apoptotic cell disassembly. Before the apoptotic cell is disposed of, there is a process of disassembly that proceeds through three recognized steps. Removal of dead cells. The removal of dead cells by neighboring phagocytic cells has been termed efferocytosis. Dying cells that undergo the final stages of apoptosis display phagocytotic molecules, such as phosphatidylserine, on their cell surface. Phosphatidylserine is normally found on the inner leaflet surface of the plasma membrane, but is redistributed during apoptosis to the extracellular surface by a protein known as scramblase. These molecules mark the cell for phagocytosis by cells possessing the appropriate receptors, such as macrophages. The removal of dying cells by phagocytes occurs in an orderly manner without eliciting an inflammatory response. During apoptosis cellular RNA and DNA are separated from each other and sorted to different apoptotic bodies; separation of RNA is initiated as nucleolar segregation. Pathway knock-outs. Many knock-outs have been made in the apoptosis pathways to test the function of each of the proteins. Several caspases, in addition to APAF1 and FADD, have been mutated to determine the new phenotype. In order to create a tumor necrosis factor (TNF) knockout, an exon containing the nucleotides 3704–5364 was removed from the gene. This exon encodes a portion of the mature TNF domain, as well as the leader sequence, which is a highly conserved region necessary for proper intracellular processing. TNF-/- mice develop normally and have no gross structural or morphological abnormalities. 
However, upon immunization with SRBC (sheep red blood cells), these mice demonstrated a deficiency in the maturation of an antibody response; they were able to generate normal levels of IgM, but could not develop specific IgG levels. Apaf-1 is the protein that turns on caspase 9 by cleavage to begin the caspase cascade that leads to apoptosis. Since a -/- mutation in the APAF-1 gene is embryonic lethal, a gene trap strategy was used in order to generate an APAF-1 -/- mouse. This approach disrupts gene function by creating an intragenic gene fusion. When an APAF-1 gene trap is introduced into cells, many morphological changes occur, such as spina bifida, the persistence of interdigital webs, and open brain. In addition, after embryonic day 12.5, the brain of the embryos showed several structural changes. APAF-1 -/- cells are protected from apoptotic stimuli such as irradiation. A BAX-1 knock-out mouse exhibits normal forebrain formation and decreased programmed cell death in some neuronal populations and in the spinal cord, leading to an increase in motor neurons. The caspase proteins are integral parts of the apoptosis pathway, so it follows that knock-outs have varying damaging results. A caspase 9 knock-out leads to a severe brain malformation. A caspase 8 knock-out leads to cardiac failure and thus embryonic lethality. However, with the use of cre-lox technology, a caspase 8 knock-out has been created that exhibits an increase in peripheral T cells, an impaired T cell response, and a defect in neural tube closure. These mice were found to be resistant to apoptosis mediated by CD95, TNFR, etc. but not resistant to apoptosis caused by UV irradiation, chemotherapeutic drugs, and other stimuli. Finally, a caspase 3 knock-out was characterized by ectopic cell masses in the brain and abnormal apoptotic features such as membrane blebbing or nuclear fragmentation. A remarkable feature of these KO mice is that they have a very restricted phenotype: Casp3, 9 and APAF-1 KO mice have deformations of neural tissue, and FADD and Casp 8 KO mice showed defective heart development; however, in both types of KO other organs developed normally and some cell types were still sensitive to apoptotic stimuli, suggesting that unknown proapoptotic pathways exist. Methods for distinguishing apoptotic from necrotic cells. Label-free live cell imaging, time-lapse microscopy, flow fluorocytometry, and transmission electron microscopy can be used to compare apoptotic and necrotic cells. There are also various biochemical techniques for analysis of cell surface markers (phosphatidylserine exposure versus cell permeability by flow cytometry), cellular markers such as DNA fragmentation (flow cytometry), caspase activation, Bid cleavage, and cytochrome c release (Western blotting). Supernatant screening for caspases, HMGB1, and cytokeratin 18 release can distinguish primary from secondary necrotic cells. However, no distinct surface or biochemical markers of necrotic cell death have been identified yet, and only negative markers are available. These include absence of apoptotic markers (caspase activation, cytochrome c release, and oligonucleosomal DNA fragmentation) and differential kinetics of cell death markers (phosphatidylserine exposure and cell membrane permeabilization). A selection of techniques that can be used to distinguish apoptotic from necroptotic cells can be found in these references. Implication in disease. Defective pathways. 
The many different types of apoptotic pathways contain a multitude of different biochemical components, many of them not yet understood. As a pathway is more or less sequential in nature, removing or modifying one component leads to an effect in another. In a living organism, this can have disastrous effects, often in the form of disease or disorder. A discussion of every disease caused by modification of the various apoptotic pathways would be impractical, but the concept overlying each one is the same: The normal functioning of the pathway has been disrupted in such a way as to impair the ability of the cell to undergo normal apoptosis. This results in a cell that lives past its "use-by date" and is able to replicate and pass on any faulty machinery to its progeny, increasing the likelihood of the cell's becoming cancerous or diseased. A recently described example of this concept in action can be seen in the development of a lung cancer cell line called NCI-H460. The "X-linked inhibitor of apoptosis protein" (XIAP) is overexpressed in cells of the H460 cell line. XIAPs bind to the processed form of caspase-9 and suppress the activity of the apoptotic activator cytochrome c; overexpression therefore leads to a decrease in the number of proapoptotic agonists. As a consequence, the balance of anti-apoptotic and proapoptotic effectors is upset in favour of the former, and the damaged cells continue to replicate despite being directed to die. Defects in regulation of apoptosis in cancer cells occur often at the level of control of transcription factors. As a particular example, defects in molecules that control transcription factor NF-κB in cancer change the mode of transcriptional regulation and the response to apoptotic signals, to curtail dependence on the tissue to which the cell belongs. This degree of independence from external survival signals can enable cancer metastasis. Dysregulation of p53. The tumor-suppressor protein p53 accumulates when DNA is damaged due to a chain of biochemical factors. Part of this pathway includes alpha-interferon and beta-interferon, which induce transcription of the "p53" gene, resulting in the increase of p53 protein level and enhancement of cancer cell-apoptosis. p53 prevents the cell from replicating by stopping the cell cycle at G1, or interphase, to give the cell time to repair; however, it will induce apoptosis if damage is extensive and repair efforts fail. Any disruption to the regulation of the "p53" or interferon genes will result in impaired apoptosis and the possible formation of tumors. Inhibition. Inhibition of apoptosis can result in a number of cancers, inflammatory diseases, and viral infections. It was originally believed that the associated accumulation of cells was due to an increase in cellular proliferation, but it is now known that it is also due to a decrease in cell death. The most common of these diseases is cancer, the disease of excessive cellular proliferation, which is often characterized by an overexpression of IAP family members. As a result, the malignant cells experience an abnormal response to apoptosis induction: Cycle-regulating genes (such as p53, ras or c-myc) are mutated or inactivated in diseased cells, and further genes (such as bcl-2) also modify their expression in tumors. Some apoptotic factors are vital during mitochondrial respiration, e.g. cytochrome c. 
Pathological inactivation of apoptosis in cancer cells is correlated with frequent respiratory metabolic shifts toward glycolysis (an observation known as the "Warburg hypothesis"). HeLa cell. Apoptosis in HeLa cells is inhibited by proteins produced by the cell; these inhibitory proteins target retinoblastoma tumor-suppressing proteins. These tumor-suppressing proteins regulate the cell cycle, but are rendered inactive when bound to an inhibitory protein. HPV E6 and E7 are inhibitory proteins expressed by the human papillomavirus, HPV being responsible for the formation of the cervical tumor from which HeLa cells are derived. HPV E6 causes p53, which regulates the cell cycle, to become inactive. HPV E7 binds to retinoblastoma tumor-suppressing proteins and limits their ability to control cell division. These two inhibitory proteins are partially responsible for HeLa cells' immortality by preventing apoptosis from occurring. Treatments. The main method of treatment for potential death from signaling-related diseases involves either increasing or decreasing the susceptibility to apoptosis in diseased cells, depending on whether the disease is caused by either the inhibition of or excess apoptosis. For instance, treatments aim to restore apoptosis to treat diseases with deficient cell death and to increase the apoptotic threshold to treat diseases involved with excessive cell death. To stimulate apoptosis, one can increase the number of death receptor ligands (such as TNF or TRAIL), antagonize the anti-apoptotic Bcl-2 pathway, or introduce Smac mimetics to inhibit the inhibitor (IAPs). The addition of agents such as Herceptin, Iressa, or Gleevec works to stop cells from cycling and causes apoptosis activation by blocking growth and survival signaling further upstream. Finally, disrupting p53-MDM2 complexes displaces p53 and activates the p53 pathway, leading to cell cycle arrest and apoptosis. Many different methods can be used either to stimulate or to inhibit apoptosis in various places along the death signaling pathway. Apoptosis is a multi-step, multi-pathway cell-death programme that is inherent in every cell of the body. In cancer, the apoptosis-to-cell-division ratio is altered. Cancer treatment by chemotherapy and irradiation kills target cells primarily by inducing apoptosis. Hyperactive apoptosis. On the other hand, loss of control of cell death (resulting in excess apoptosis) can lead to neurodegenerative diseases, hematologic diseases, and tissue damage. Neurons that rely on mitochondrial respiration undergo apoptosis in neurodegenerative diseases such as Alzheimer's and Parkinson's (an observation known as the "inverse Warburg hypothesis"). Moreover, there is an inverse epidemiological comorbidity between neurodegenerative diseases and cancer. The progression of HIV is directly linked to excess, unregulated apoptosis. In a healthy individual, the number of CD4+ lymphocytes is in balance with the cells generated by the bone marrow; however, in HIV-positive patients, this balance is lost due to an inability of the bone marrow to regenerate CD4+ cells. In the case of HIV, CD4+ lymphocytes die at an accelerated rate through uncontrolled apoptosis when stimulated. At the molecular level, hyperactive apoptosis can be caused by defects in signaling pathways that regulate the Bcl-2 family proteins. 
Increased expression of apoptotic proteins such as BIM, or their decreased proteolysis, leads to cell death and can cause a number of pathologies, depending on the cells where excessive activity of BIM occurs. Cancer cells can escape apoptosis through mechanisms that suppress BIM expression or by increased proteolysis of BIM. Treatments. Treatments aiming to inhibit apoptosis work by blocking specific caspases. Finally, the Akt protein kinase promotes cell survival through two pathways. Akt phosphorylates and inhibits Bad (a Bcl-2 family member), causing Bad to interact with the 14-3-3 scaffold, resulting in Bcl-2 dissociation and thus cell survival. Akt also activates IKKα, which leads to NF-κB activation and cell survival. Active NF-κB induces the expression of anti-apoptotic genes such as Bcl-2, resulting in inhibition of apoptosis. NF-κB has been found to play both an antiapoptotic role and a proapoptotic role depending on the stimuli utilized and the cell type. HIV progression. The progression of the human immunodeficiency virus infection into AIDS is due primarily to the depletion of CD4+ T-helper lymphocytes in a manner that is too rapid for the body's bone marrow to replenish the cells, leading to a compromised immune system. One of the mechanisms by which T-helper cells are depleted is apoptosis, which results from a series of biochemical pathways. Cells may also die as direct consequences of viral infections. HIV-1 expression induces tubular cell G2/M arrest and apoptosis. The progression from HIV to AIDS is not immediate or even necessarily rapid; HIV's cytotoxic activity toward CD4+ lymphocytes is classified as AIDS once a given patient's CD4+ cell count falls below 200. Researchers from Kumamoto University in Japan have developed a new method to eradicate HIV in viral reservoir cells, named "Lock-in and apoptosis." Using the synthesized compound Heptanoylphosphatidyl L-Inositol Pentakisphosphate (or L-Hippo) to bind strongly to the HIV protein PR55Gag, they were able to suppress viral budding. By suppressing viral budding, the researchers were able to trap the HIV virus in the cell and allow for the cell to undergo apoptosis (natural cell death). Associate Professor Mikako Fujita has stated that the approach is not yet available to HIV patients because the research team has to conduct further research on combining the drug therapy that currently exists with this "Lock-in and apoptosis" approach to lead to complete recovery from HIV. Viral infection. Viral induction of apoptosis occurs when one or several cells of a living organism are infected with a virus, leading to cell death. Cell death in organisms is necessary for the normal development of cells and cell cycle maturation. It is also important in maintaining the regular functions and activities of cells. Viruses can trigger apoptosis of infected cells via a range of mechanisms. Canine distemper virus (CDV) is known to cause apoptosis in the central nervous system and lymphoid tissue of infected dogs in vivo and in vitro. Apoptosis caused by CDV is typically induced via the extrinsic pathway, which activates caspases that disrupt cellular function and eventually lead to the cell's death. In normal cells, CDV activates caspase-8 first, which works as the initiator protein, followed by the executioner protein caspase-3. However, apoptosis induced by CDV in HeLa cells does not involve the initiator protein caspase-8. HeLa cell apoptosis caused by CDV follows a different mechanism than that in Vero cell lines. 
This change in the caspase cascade suggests CDV induces apoptosis via the intrinsic pathway, excluding the need for the initiator caspase-8. The executioner protein is instead activated by the internal stimuli caused by viral infection, not by a caspase cascade. The Oropouche virus (OROV) is found in the family "Bunyaviridae". The study of apoptosis brought on by "Bunyaviridae" was initiated in 1996, when it was observed that the La Crosse virus induced apoptosis in the kidney cells of baby hamsters and in the brains of baby mice. OROV is transmitted between humans by the biting midge ("Culicoides paraensis"). It is referred to as a zoonotic arbovirus and causes febrile illness, characterized by the onset of a sudden fever known as Oropouche fever. The Oropouche virus also causes disruption in cultured cells (cells that are cultivated in distinct and specific conditions). An example of this can be seen in HeLa cells, whereby the cells begin to degenerate shortly after they are infected. With the use of gel electrophoresis, it can be observed that OROV causes DNA fragmentation in HeLa cells. It can be interpreted by counting, measuring, and analyzing the cells of the Sub/G1 cell population. When HeLa cells are infected with OROV, cytochrome c is released from the membrane of the mitochondria into the cytosol of the cells. This type of interaction shows that apoptosis is activated via an intrinsic pathway. For apoptosis to occur in OROV infection, viral uncoating and viral internalization, along with the replication of cells, are necessary. Apoptosis in some viruses is activated by extracellular stimuli. However, studies have demonstrated that the OROV infection causes apoptosis to be activated through intracellular stimuli and involves the mitochondria. Many viruses encode proteins that can inhibit apoptosis. Several viruses encode viral homologs of Bcl-2. These homologs can inhibit proapoptotic proteins such as BAX and BAK, which are essential for the activation of apoptosis. Examples of viral Bcl-2 proteins include the Epstein-Barr virus BHRF1 protein and the adenovirus E1B 19K protein. Some viruses express caspase inhibitors that inhibit caspase activity; an example is the CrmA protein of cowpox viruses. A number of viruses can also block the effects of TNF and Fas. For example, the M-T2 protein of myxoma viruses can bind TNF, preventing it from binding the TNF receptor and inducing a response. Furthermore, many viruses express p53 inhibitors that can bind p53 and inhibit its transcriptional transactivation activity. As a consequence, p53 cannot induce apoptosis, since it cannot induce the expression of proapoptotic proteins. The adenovirus E1B-55K protein and the hepatitis B virus HBx protein are examples of viral proteins that can perform such a function. Viruses can remain intact during apoptosis, in particular in the latter stages of infection. They can be exported in the "apoptotic bodies" that pinch off from the surface of the dying cell, and the fact that they are engulfed by phagocytes prevents the initiation of a host response. This favours the spread of the virus. Plants. Programmed cell death in plants has a number of molecular similarities to that of animal apoptosis, but it also has differences, notable ones being the presence of a cell wall and the lack of an immune system that removes the pieces of the dead cell. 
Instead of an immune response, the dying cell synthesizes substances to break itself down and places them in a vacuole that ruptures as the cell dies. Additionally, plants do not contain phagocytic cells, which are essential in the process of breaking down and removing apoptotic bodies. Whether this whole process resembles animal apoptosis closely enough to warrant using the name "apoptosis" (as opposed to the more general "programmed cell death") is unclear. Caspase-independent apoptosis. The characterization of the caspases allowed the development of caspase inhibitors, which can be used to determine whether a cellular process involves active caspases. Using these inhibitors it was discovered that cells can die while displaying a morphology similar to apoptosis without caspase activation. Later studies linked this phenomenon to the release of AIF (apoptosis-inducing factor) from the mitochondria and its translocation into the nucleus mediated by its NLS (nuclear localization signal). Inside the mitochondria, AIF is anchored to the inner membrane. In order to be released, the protein is cleaved by a calcium-dependent calpain protease.
2459
Appomattox
Appomattox, shorthand for the surrender of Robert E. Lee to Ulysses S. Grant in the American Civil War, may also refer to:
2460
Anal sex
Anal sex or anal intercourse is generally the insertion and thrusting of the erect penis into a person's anus, or anus and rectum, for sexual pleasure. Other forms of anal sex include fingering, the use of sex toys, anilingus, and pegging. Although "anal sex" most commonly means penile-anal penetration, sources sometimes use "anal intercourse" to exclusively denote penile-anal penetration, and "anal sex" to denote any form of anal sexual activity, especially between pairings as opposed to anal masturbation. While anal sex is commonly associated with male homosexuality, research shows that not all gay men engage in anal sex and that it is not uncommon in heterosexual relationships. Types of anal sex can also be a part of lesbian sexual practices. People may experience pleasure from anal sex by stimulation of the anal nerve endings, and orgasm may be achieved through anal penetration – by indirect stimulation of the prostate in men, indirect stimulation of the clitoris or an area of the vagina (sometimes called "the G-spot") in women, and other sensory nerves (especially the pudendal nerve). However, people may also find anal sex painful, sometimes extremely so, which may be due to psychological factors in some cases. As with most forms of sexual activity, anal sex participants risk contracting sexually transmitted infections (STIs). Anal sex is considered a high-risk sexual practice because of the vulnerability of the anus and rectum. The anal and rectal tissue are delicate and do not provide lubrication like the vagina does, so they can easily tear and permit disease transmission, especially if a personal lubricant is not used. Anal sex without protection of a condom is considered the riskiest form of sexual activity, and therefore health authorities such as the World Health Organization (WHO) recommend safe sex practices for anal sex. Strong views are often expressed about anal sex. It is controversial in various cultures, especially with regard to religious prohibitions. This is commonly due to prohibitions against anal sex among males or teachings about the procreative purpose of sexual activity. It may be considered taboo or unnatural, and is a criminal offense in some countries, punishable by corporal or capital punishment. By contrast, anal sex may also be considered a natural and valid form of sexual activity as fulfilling as other desired sexual expressions, and can be an enhancing or primary element of a person's sex life. Anatomy and stimulation. The abundance of nerve endings in the anal region and rectum can make anal sex pleasurable for men or women. The internal and external sphincter muscles control the opening and closing of the anus; these muscles, which are sensitive membranes made up of many nerve endings, facilitate pleasure or pain during anal sex. "Human Sexuality: An Encyclopedia" states that "the inner third of the anal canal is less sensitive to touch than the outer two-thirds, but is more sensitive to pressure" and that "the rectum is a curved tube about eight or nine inches long and has the capacity, like the anus, to expand". Research indicates that anal sex occurs significantly less frequently than other sexual behaviors, but its association with dominance and submission, as well as taboo, makes it an appealing stimulus to people of all sexual orientations. 
In addition to sexual penetration by the penis, people may use sex toys such as a dildo, a butt plug or anal beads, engage in fingering, anilingus, pegging, anal masturbation or fisting for anal sexual activity, and different sex positions may also be included. Fisting is the least practiced of the activities, partly because it is uncommon that people can relax enough to accommodate an object as big as a fist being inserted into the anus. In a male receptive partner, being anally penetrated can produce a pleasurable sensation due to the object of insertion rubbing or brushing against the prostate through the anal wall. This can result in pleasurable sensations and can lead to an orgasm in some cases. Prostate stimulation can produce a deeper orgasm, sometimes described by men as more widespread and intense, longer-lasting, and allowing for greater feelings of ecstasy than orgasm elicited by penile stimulation only. The prostate is located next to the rectum and is the larger, more developed male homologue (variation) to the female Skene's glands. It is also typical for a man to not reach orgasm as a receptive partner solely from anal sex. General statistics indicate that 70–80% of women require direct clitoral stimulation to achieve orgasm. The vaginal walls contain significantly fewer nerve endings than the clitoris (which has many nerve endings specifically intended for orgasm), and therefore intense sexual pleasure, including orgasm, from vaginal sexual stimulation is less likely to occur than from direct clitoral stimulation in the majority of women. The clitoris is composed of more than the externally visible glans (head). The vagina, for example, is flanked on each side by the clitoral crura, the internal legs of the clitoris, which are highly sensitive and become engorged with blood when sexually aroused. Indirect stimulation of the clitoris through anal penetration may be caused by the shared sensory nerves, especially the pudendal nerve, which gives off the inferior anal nerves and divides into the perineal nerve and the dorsal nerve of the clitoris. Although the anus has many nerve endings, their purpose is not specifically for inducing orgasm, and so a woman achieving orgasm solely by anal stimulation is rare. The Gräfenberg spot, or G-spot, is a debated area of female anatomy, particularly among doctors and researchers, but it is typically described as being located behind the female pubic bone surrounding the urethra and accessible through the anterior wall of the vagina; it and other areas of the vagina are considered to have tissue and nerves that are related to the clitoris. Direct stimulation of the clitoris, a G-spot area, or both, while engaging in anal sex can help some women enjoy the activity and reach orgasm during it. Stimulation from anal sex can additionally be affected by popular perception or portrayals of the activity, such as erotica or pornography. In pornography, anal sex is commonly portrayed as a desirable, painless routine that does not require personal lubricant; this can result in couples performing anal sex without care, and men and women believing that it is unusual for women, as receptive partners, to find discomfort or pain instead of pleasure from the activity. By contrast, each person's sphincter muscles react to penetration differently, the anal sphincters have tissues that are more prone to tearing, and the anus and rectum do not provide lubrication for sexual penetration like the vagina does. 
Researchers say adequate application of a personal lubricant, relaxation, and communication between sexual partners are crucial to avoid pain or damage to the anus or rectum. Additionally, ensuring that the anal area is clean and the bowel is empty, for both aesthetics and practicality, may be desired by participants. Male to female. Behaviors and views. The anal sphincters are usually tighter than the pelvic muscles of the vagina, which can enhance the sexual pleasure for the inserting male during male-to-female anal intercourse because of the pressure applied to the penis. Men may also enjoy the penetrative role during anal sex because of its association with dominance, because it is made more alluring by a female partner or society in general insisting that it is forbidden, or because it presents an additional option for penetration. While some women find being a receptive partner during anal intercourse painful or uncomfortable, or only engage in the act to please a male sexual partner, other women find the activity pleasurable or prefer it to vaginal intercourse. In a 2010 clinical review article of heterosexual anal sex, "anal intercourse" is used to specifically denote penile-anal penetration, and "anal sex" is used to denote any form of anal sexual activity. The review suggests that anal sex is exotic among the sexual practices of some heterosexuals and that "for a certain number of heterosexuals, anal intercourse is pleasurable, exciting, and perhaps considered more intimate than vaginal sex". Anal intercourse is sometimes used as a substitute for vaginal intercourse during menstruation. The likelihood of pregnancy occurring during anal sex is greatly reduced, as anal sex alone cannot lead to pregnancy unless sperm is somehow transported to the vaginal opening. Because of this, some couples practice anal intercourse as a form of contraception, often in the absence of a condom. Male-to-female anal sex is commonly viewed as a way of preserving female virginity because it is non-procreative and does not tear the hymen; a person, especially a teenage girl or woman, who engages in anal sex or other sexual activity with no history of having engaged in vaginal intercourse is often regarded among heterosexuals and researchers as not having yet experienced virginity loss. This is sometimes called "technical virginity." Heterosexuals may view anal sex as "fooling around" or as foreplay; scholar Laura M. Carpenter stated that this view "dates to the late 1600s, with explicit 'rules' appearing around the turn of the twentieth century, as in marriage manuals defining petting as 'literally every caress known to married couples but does not include complete sexual intercourse.'" Prevalence. Because most research on anal intercourse addresses men who have sex with men, little data exists on the prevalence of anal intercourse among heterosexual couples. In Kimberly R. McBride's 2010 clinical review on heterosexual anal intercourse and other forms of anal sexual activity, it is suggested that changing norms may affect the frequency of heterosexual anal sex. McBride and her colleagues investigated the prevalence of non-intercourse anal sex behaviors among a sample of men (n=1,299) and women (n=1,919) compared to anal intercourse experience and found that 51% of men and 43% of women had participated in at least one act of oral–anal sex, manual–anal sex, or anal sex toy use. 
The report states the majority of men (n=631) and women (n=856) who reported heterosexual anal intercourse in the past 12 months were in exclusive, monogamous relationships: 69% and 73%, respectively. The review added that because "relatively little attention [is] given to anal intercourse and other anal sexual behaviors between heterosexual partners", this means that it is "quite rare" to have research "that specifically differentiates the anus as a sexual organ or addresses anal sexual function or dysfunction as legitimate topics. As a result, we do not know the extent to which anal intercourse differs qualitatively from coitus." According to a 2010 study from the National Survey of Sexual Health and Behavior (NSSHB) that was authored by Debby Herbenick et al., although anal intercourse is reported by fewer women than other partnered sex behaviors, partnered women in the age groups between 18 and 49 are significantly more likely to report having anal sex in the past 90 days. Women engaged in anal intercourse less commonly than men. Vaginal intercourse was practiced more than insertive anal intercourse among men, but 13% to 15% of men aged 25 to 49 practiced insertive anal intercourse. With regard to adolescents, limited data also exists. This may be because of the taboo nature of anal sex and that teenagers and caregivers subsequently avoid talking to one another about the topic. It is also common for subject review panels and schools to avoid the subject. A 2000 study found that 22.9% of college students who self-identified as non-virgins had engaged in anal sex. They used condoms during anal sex 20.9% of the time as compared with 42.9% of the time with vaginal intercourse. The greater prevalence of anal sex among heterosexuals today than in the past has been linked to the increase in consumption of anal pornography among men, especially among those who view it on a regular basis. Seidman et al. argued that "cheap, accessible and, especially, interactive media have enabled many more people to produce as well as consume pornography", and that this modern way of producing pornography, in addition to the buttocks and anus having become more eroticized, has led to a significant interest in or obsession with anal sex among men. Male to male. Behaviors and views. Historically, anal sex has been commonly associated with male homosexuality. However, many gay men and men who have sex with men in general (those who identify as gay, bisexual, heterosexual or have not identified their sexual identity) do not engage in anal sex. Among men who have anal sex with other men, the insertive partner may be referred to as the "top" and the one being penetrated may be referred to as the "bottom". Those who enjoy either role may be referred to as "versatile". Gay men who prefer anal sex may view it as their version of intercourse and a natural expression of intimacy that is capable of providing pleasure. The notion that it might resonate with gay men with the same emotional significance that vaginal sex resonates with heterosexuals has also been considered. Some men who have sex with men, however, believe that being a receptive partner during anal sex questions their masculinity. Men who have sex with men may also prefer to engage in frot or other forms of mutual masturbation because they find it more pleasurable or more affectionate, to preserve technical virginity, or as safe sex alternatives to anal sex, while other frot advocates denounce anal sex as degrading to the receptive partner and unnecessarily risky. 
Prevalence. Reports regarding the prevalence of anal sex among gay men and other men who have sex with men vary. A survey in "The Advocate" in 1994 indicated that 46% of gay men preferred to penetrate their partners, while 43% preferred to be the receptive partner. Other sources suggest that roughly three-fourths of gay men have had anal sex at one time or another, with an equal percentage participating as tops and bottoms. A 2012 NSSHB sex survey in the U.S. suggests high lifetime participation in anal sex among gay men: 83.3% report ever taking part in anal sex in the insertive position and 90% in the receptive position, although only between a quarter and a third self-report very recent engagement in the practice, defined as within the past 30 days. Oral sex and mutual masturbation are more common than anal stimulation among men in sexual relationships with other men. According to Weiten et al., anal intercourse is generally more popular among gay male couples than among heterosexual couples, but "it ranks behind oral sex and mutual masturbation" among both sexual orientations in prevalence. Wellings et al. reported that "the equation of 'homosexual' with 'anal' sex among men is common among lay and health professionals alike" and that "yet an Internet survey of 180,000 MSM across Europe (EMIS, 2011) showed that oral sex was most commonly practised, followed by mutual masturbation, with anal intercourse in third place". Female to male. Women may sexually stimulate a man's anus by fingering the exterior or interior areas of the anus; they may also stimulate the perineum (which, for males, is between the base of the scrotum and the anus), massage the prostate or engage in anilingus. Sex toys, such as a dildo, may also be used. The practice of a woman penetrating a man's anus with a strap-on dildo for sexual activity is called pegging. It is common for heterosexual men to reject being receptive partners during anal sex because they believe it is a feminine act, can make them vulnerable, or contradicts their sexual orientation; they may believe being a receptive partner is indicative that they are gay. Reece et al. reported in 2010 that receptive anal intercourse is infrequent among men overall, stating that "an estimated 7% of men 14 to 94 years old reported being a receptive partner during anal intercourse". The "BMJ" also commented on the practice in 1999. Female to female. With regard to lesbian sexual practices, anal sex includes fingering, use of a dildo or other sex toys, or anilingus. There is less research on anal sexual activity among women who have sex with women compared to couples of other sexual orientations. In 1987, a non-scientific study (Munson) was conducted of more than 100 members of a lesbian social organization in Colorado. When asked what techniques they used in their last ten sexual encounters, lesbians in their 30s were twice as likely as other age groups to engage in anal stimulation (with a finger or dildo). A 2014 study of partnered lesbian women in Canada and the U.S. found that 7% engaged in anal stimulation or penetration at least once a week; about 10% did so monthly and 70% did not at all. Anilingus is also less often practiced among female same-sex couples. Health risks. General risks. Anal sex can expose its participants to two principal dangers: infections due to the high number of infectious microorganisms not found elsewhere on the body, and physical damage to the anus and rectum due to their fragility. 
Unprotected penile-anal penetration, colloquially known as "barebacking", carries a higher risk of passing on sexually transmitted infections (STIs) because the anal sphincter is a delicate, easily torn tissue that can provide an entry for pathogens. Use of condoms, ample lubrication to reduce the risk of tearing, and safer sex practices in general reduce the risk of STIs. However, a condom can break or otherwise come off during anal sex, and this is more likely to happen with anal sex than with other sex acts because of the tightness of the anal sphincters during friction. Unprotected receptive anal sex (with an HIV-positive partner) is the sex act most likely to result in HIV transmission. Other infections that can be transmitted by unprotected anal sex are human papillomavirus (HPV) (which can increase risk of anal cancer); typhoid fever; amoebiasis; chlamydia; cryptosporidiosis; "E. coli" infections; giardiasis; gonorrhea; hepatitis A; hepatitis B; hepatitis C; herpes simplex; Kaposi's sarcoma-associated herpesvirus (HHV-8); lymphogranuloma venereum; "Mycoplasma hominis"; "Mycoplasma genitalium"; pubic lice; salmonellosis; shigella; syphilis; tuberculosis; and "Ureaplasma urealyticum". As with other sexual practices, people without sound knowledge about the sexual risks involved are susceptible to STIs. Because of the view that anal sex is not "real sex" and therefore does not result in virginity loss, or pregnancy, teenagers and other young people may consider vaginal intercourse riskier than anal intercourse and believe that an STI can only result from vaginal intercourse. It may be because of these views that condom use with anal sex is often reported to be low and inconsistent across all groups in various countries. Although anal sex alone does not lead to pregnancy, pregnancy can still occur with anal sex or other forms of sexual activity if the penis is near the vagina (such as during intercrural sex or other genital-genital rubbing) and its sperm is deposited near the vagina's entrance and travels along the vagina's lubricating fluids; the risk of pregnancy can also occur without the penis being near the vagina because sperm may be transported to the vaginal opening by the vagina coming in contact with fingers or other non-genital body parts that have come in contact with semen. There are a variety of factors that make male-to-female anal intercourse riskier than vaginal intercourse for women, including the risk of HIV transmission being higher for anal intercourse than for vaginal intercourse. The risk of injury to the woman during anal intercourse is also significantly higher than the risk of injury to her during vaginal intercourse because of the durability of the vaginal tissues compared to the anal tissues. Additionally, if a man moves from anal intercourse immediately to vaginal intercourse without a condom or without changing it, infections can arise in the vagina (or urinary tract) due to bacteria present within the anus; these infections can also result from switching between vaginal sex and anal sex by the use of fingers or sex toys. Pain during receptive anal sex among gay men (or men who have sex with men) is formally known as "anodyspareunia." In one study, 61% of gay or bisexual men said they experienced painful receptive anal sex and that it was the most frequent sexual difficulty they had experienced. 
In addition, 24% of gay or bisexual men stated that they always experienced some degree of pain during anal sex, and about 12% of gay men find it too painful to pursue receptive anal sex; it was concluded that the perception of anal sex as painful is as likely to be psychologically or emotionally based as it is to be physically based. Factors predictive of pain during anal sex include inadequate lubrication, feeling tense or anxious, lack of stimulation, as well as lack of social ease with being gay and being closeted. Research has found that psychological factors can in fact be the primary contributors to the experience of pain during anal intercourse and that adequate communication between sexual partners can prevent it, countering the notion that pain is always inevitable during anal sex. Unprotected anal sex is a risk factor for formation of antisperm antibodies (ASA) in the recipient. In some people, ASA may cause autoimmune infertility. Antisperm antibodies impair fertilization, negatively affect the implantation process and impair growth and development of the embryo. Physical damage: hemorrhoids and rectal prolapse. Anal sex can exacerbate hemorrhoids and therefore result in bleeding; in other cases, the formation of a hemorrhoid is attributed to anal sex. If bleeding occurs as a result of anal sex, it may also be because of a tear in the anal or rectal tissues (an anal fissure) or perforation (a hole) in the colon, the latter of which is a serious medical issue requiring immediate medical attention. Because of the rectum's lack of elasticity, the anal mucous membrane being thin, and small blood vessels being present directly beneath the mucous membrane, tiny tears and bleeding in the rectum usually result from penetrative anal sex, though the bleeding is typically minor and therefore not visible. In contrast to other anal sexual behaviors, anal fisting poses a more serious danger of damage due to the deliberate stretching of the anal and rectal tissues; anal fisting injuries include anal sphincter lacerations and rectal and sigmoid colon (rectosigmoid) perforation, which might result in death. Repetitive penetrative anal sex may result in the anal sphincters becoming weakened, which may cause rectal prolapse or affect the ability to hold in feces (a condition known as fecal incontinence). Rectal prolapse is relatively uncommon, however, especially in men, and its causes are not well understood. Kegel exercises have been used to strengthen the anal sphincters and overall pelvic floor, and may help prevent or remedy fecal incontinence. Cancer. Most cases of anal cancer are related to infection with the human papilloma virus (HPV). Anal sex alone does not cause anal cancer; the risk of anal cancer through anal sex is attributed to HPV infection, which is often contracted through unprotected anal sex. Anal cancer is relatively rare, and significantly less common than cancer of the colon or rectum (colorectal cancer); the American Cancer Society states that it affects approximately 7,060 people (4,430 in women and 2,630 in men) and results in approximately 880 deaths (550 in women and 330 in men) in the United States, and that, though anal cancer has been on the rise for many years, it is mainly diagnosed in adults, "with an average age being in the early 60s" and it "affects women somewhat more often than men." 
Though anal cancer is serious, treatment for it is "often very effective" and most anal cancer patients can be cured of the disease; the American Cancer Society adds that "receptive anal intercourse also increases the risk of anal cancer in both men and women, particularly in those younger than the age of 30. Because of this, men who have sex with men have a high risk of this cancer." Cultural views. General. Different cultures have had different views on anal sex throughout human history, with some cultures more positive about the activity than others. Historically, anal sex has been restricted or condemned, especially with regard to religious beliefs; it has also commonly been used as a form of domination, usually with the active partner (the one who is penetrating) representing masculinity and the passive partner (the one who is being penetrated) representing femininity. A number of cultures have especially recorded the practice of anal sex between males, and anal sex between males has been especially stigmatized or punished. In some societies, individuals discovered to have engaged in the practice were put to death, by such means as decapitation or burning, or were mutilated. Anal sex has been more accepted in modern times; it is often considered a natural, pleasurable form of sexual expression. Some people, men in particular, are only interested in anal sex for sexual satisfaction, which has been partly attributed to the buttocks and anus being more eroticized in modern culture, including via pornography. Engaging in anal sex is still, however, punished in some societies. For example, regarding LGBT rights in Iran, Iran's Penal Code states in Article 109 that "both men involved in same-sex penetrative (anal) or non-penetrative sex will be punished" and "Article 110 states that those convicted of engaging in anal sex will be executed and that the manner of execution is at the discretion of the judge". Ancient and non-Western cultures. From the earliest records, the ancient Sumerians had very relaxed attitudes toward sex and did not regard anal sex as taboo. Certain priestesses were forbidden from producing offspring and frequently engaged in anal sex as a method of birth control. Anal sex is also obliquely alluded to by a description of an omen in which a man "keeps saying to his wife: 'Bring your backside.'" Other Sumerian texts refer to homosexual anal intercourse. The gala, a set of priests who worked in the temples of the goddess Inanna, where they performed elegies and lamentations, were especially known for their homosexual proclivities. The Sumerian sign for gala was a ligature of the signs for 'penis' and 'anus'. One Sumerian proverb reads: "When the gala wiped off his ass [he said], 'I must not arouse that which belongs to my mistress [i.e., Inanna].'" The term "Greek love" has long been used to refer to anal intercourse, and in modern times, "doing it the Greek way" is sometimes used as slang for anal sex. Male-male anal sex was not a universally accepted practice in Ancient Greece; it was the target of jokes in some Athenian comedies. Aristophanes, for instance, mockingly alludes to the practice, claiming, "Most citizens are euryproktoi ('wide-arsed') now." Several terms were used by Greek residents to categorize men who chronically practiced passive anal intercourse. Pederastic practices in ancient Greece (sexual activity between men and adolescent boys), at least in Athens and Sparta, were expected to avoid penetrative sex of any kind. 
Greek artwork of sexual interaction between men and boys usually depicted fondling or intercrural sex, which was not condemned for violating or feminizing boys, while male-male anal intercourse was usually depicted between males of the same age-group. Intercrural sex was not considered penetrative and two males engaging in it was considered a "clean" act. Some sources explicitly state that anal sex between men and boys was criticized as shameful and seen as a form of hubris. Evidence suggests, however, that the younger partner in pederastic relationships (i.e., the eromenos) did engage in receptive anal intercourse so long as no one accused him of being 'feminine'. In later Roman-era Greek poetry, anal sex became a common literary convention, represented as taking place with "eligible" youths: those who had attained the proper age but had not yet become adults. Seducing those not of proper age (for example, non-adolescent children) into the practice was considered very shameful for the adult, and having such relations with a male who was no longer adolescent was considered more shameful for the young male than for the one mounting him. Greek courtesans, or hetaerae, are said to have frequently practiced male-female anal intercourse as a means of preventing pregnancy. A male citizen taking the passive (or receptive) role in anal intercourse was condemned in Rome as an act of impudicitia ('immodesty' or 'unchastity'); free men, however, could take the active role with a young male slave. The latter was allowed because anal intercourse was considered equivalent to vaginal intercourse in this way; men were said to "take it like a woman" (muliebria pati, 'to undergo womanly things') when they were anally penetrated, but when a man performed anal sex on a woman, she was thought of as playing the boy's role. Likewise, women were believed to only be capable of anal sex or other sex acts with women if they possessed an exceptionally large clitoris or a dildo. The passive partner in any of these cases was always considered a woman or a boy because being the one who penetrates was characterized as the only appropriate way for an adult male citizen to engage in sexual activity, and he was therefore considered unmanly if he was the one who was penetrated; slaves were considered "non-citizens". Although Roman men often availed themselves of their own slaves or others for anal intercourse, Roman comedies and plays presented Greek settings and characters for explicit acts of anal intercourse, and this may be indicative that the Romans thought of anal sex as something specifically "Greek". In Japan, records (including detailed shunga) show that some males engaged in penetrative anal intercourse with males. Evidence suggestive of widespread male-female anal intercourse in a pre-modern culture can be found in the erotic vases, or stirrup-spout pots, made by the Moche people of Peru; in a survey of a collection of these pots, it was found that 31 percent of them depicted male-female anal intercourse, significantly more than any other sex act. Moche pottery of this type belonged to the world of the dead, which was believed to be a reversal of life. Therefore, the reverse of common practices was often portrayed. The Larco Museum houses an erotic gallery in which this pottery is showcased. Western cultures. In many Western countries, anal sex has generally been taboo since the Middle Ages, when heretical movements were sometimes attacked by accusations that their members practiced anal sex among themselves. 
At that time, celibate members of the Christian clergy were accused of engaging in "sins against nature", including anal sex. The term "buggery" originated in medieval Europe as an insult used to describe the rumored same-sex sexual practices of the heretics from a sect originating in Bulgaria, where its followers were called Bogomils; when they spread out of the country, they were called "buggres" (from the ethnonym "Bulgars"). A more archaic term for the practice is "pedicate", from the Latin pedicare, with the same meaning. The Renaissance poet Pietro Aretino advocated anal sex in his Sonetti Lussuriosi ('Lust Sonnets'). While men who engaged in homosexual relationships were generally suspected of engaging in anal sex, many such individuals did not. Among these, in recent times, have been André Gide, who found it repulsive, and Noël Coward, who had a horror of disease, and asserted when young that "I'd never do anything – well the disgusting thing they do – because I know I could get something wrong with me". During the 1980s, Margaret Thatcher questioned the inclusion of "risky sex" in the United Kingdom's AIDS-related government advertisements. Thatcher also questioned the inclusion of the term "anal sex", in line with the Obscene Publications Act 1959. The term "rectal sex" was agreed upon instead. Religion. Judaism. The "Mishneh Torah", a text considered authoritative by Orthodox Jewish sects, states "since a man's wife is permitted to him, he may act with her in any manner whatsoever. He may have intercourse with her whenever he so desires and kiss any organ of her body he wishes, and he may have intercourse with her naturally or unnaturally [traditionally, "unnaturally" refers to anal and oral sex], provided that he does not expend semen to no purpose. Nevertheless, it is an attribute of piety that a man should not act in this matter with levity and that he should sanctify himself at the time of intercourse." Christianity. Christian texts may sometimes euphemistically refer to anal sex as the peccatum contra naturam ('the sin against nature', after Thomas Aquinas), sodomitica luxuria ('sodomitical lusts', in one of Charlemagne's ordinances), or peccatum illud horribile, inter christianos non nominandum ('that horrible sin that among Christians is not to be named'). Islam. Liwat, or the sin of Lot's people, which has come to be interpreted as referring generally to same-sex sexual activity, is commonly officially prohibited by Islamic sects; there are parts of the Quran which talk about the smiting of Sodom and Gomorrah, and this is thought to be a reference to "unnatural" sex, and so there are hadith and Islamic laws which prohibit it. While, in Islamic belief, it is objectionable to use certain words to refer to homosexuality because this is considered blasphemy toward the prophet of Allah, and the terms "sodomy" and "homosexuality" are therefore preferred, same-sex male practitioners of anal sex are called "luti" or "lutiyin" in the plural and are seen as criminals in the same way that a thief is a criminal. Other animals. As a form of non-reproductive sexual behavior in animals, anal sex has been observed in a few other primates, both in captivity and in the wild.
2466
Aarau
Aarau (, ) is a town, a municipality, and the capital of the northern Swiss canton of Aargau. The town is also the capital of the district of Aarau. It is German-speaking and predominantly Protestant. Aarau is situated on the Swiss plateau, in the valley of the Aare, on the river's right bank, and at the southern foot of the Jura Mountains, and is west of Zürich, south of Basel and northeast of Bern. The municipality borders directly on the canton of Solothurn to the west. It is the largest town in Aargau. At the beginning of 2010 Rohr became a district of Aarau. The official language of Aarau is (the Swiss variety of Standard) German, but the main spoken language is the local variant of the Alemannic Swiss German dialect. Geography and geology. The old city of Aarau is situated on a rocky outcrop at a narrowing of the Aare river valley, at the southern foot of the Jura mountains. Newer districts of the city lie to the south and east of the outcrop, as well as higher up the mountain, and in the valley on both sides of the Aare. The neighboring municipalities are Küttigen to the north and Buchs to the east, Suhr to the south-east, Unterentfelden to the south, and Eppenberg-Wöschnau and Erlinsbach to the west. Aarau and the nearby neighboring municipalities have grown together and now form an interconnected agglomeration. The only exception is Unterentfelden, whose settlements are divided from Aarau by the extensive forests of Gönhard and Zelgli. Approximately nine-tenths of the city is south of the Aare, and one tenth is to the north. It has an area, , of . Of this area, 6.3% is used for agricultural purposes, while 34% is forested. Of the rest of the land, 55.2% is settled (buildings or roads) and the remainder (4.5%) is non-productive (rivers or lakes). The lowest elevation, , is found at the banks of the Aare, and the highest elevation, at , is the Hungerberg on the border with Küttigen. History. Prehistory. A few artifacts from the Neolithic period were found in Aarau. Near the location of the present train station, the ruins of a settlement from the Bronze Age (about 1000 BC) have been excavated. The Roman road between Salodurum (Solothurn) and Vindonissa passed through the area, along the route now covered by the Bahnhofstrasse. In 1976 divers in the Aare found part of a seven-meter wide wooden bridge from late Roman times. Middle Ages. Aarau was founded around AD 1240 by the counts of Kyburg. Aarau is first mentioned in 1248 as "Arowe". Around 1250 it was mentioned as "Arowa". However, the first mention of a city-sized settlement was in 1256. The town was ruled from the "Rore" tower, which has been incorporated into the modern city hall. In 1273 the counts of Kyburg died out. Agnes of Kyburg, who had no male relations, sold the family's lands to King Rudolf I von Habsburg. He granted Aarau its city rights in 1283. In the 14th century the city was expanded in two stages, and a second defensive wall was constructed. A deep ditch separated the city from its "suburb;" its location is today marked by a wide street named "Graben" (meaning Ditch). In 1415 Bern invaded lower Aargau with the help of Solothurn. Aarau capitulated after a short resistance, and was forced to swear allegiance to the new rulers. In the 16th century, the rights of the lower classes were abolished. In March 1528 the citizens of Aarau allowed the introduction of Protestantism at the urging of the Bernese. A growth in population during the 16th century led to taller buildings and denser construction methods. 
Early forms of industry developed at this time; however, unlike in other cities, no guilds were formed in Aarau. On 11 August 1712, the Peace of Aarau was signed into effect. This granted each canton the right to choose its own religion, thereby ending Catholic control. Starting in the early 18th century, the textile industry was established in Aarau. German immigrants contributed to the city's favorable conditions by introducing cotton and silk manufacturing. These highly educated immigrants were also responsible for educational reform and the enlightened, revolutionary spirit that developed in Aarau. 1798: Capital of the Helvetic Republic. On 27 December 1797, the last Tagsatzung of the Old Swiss Confederacy was held in Aarau. Two weeks later a French envoy continued to foment the revolutionary opinions of the city. The contrast between a high level of education and a low level of political rights was particularly great in Aarau, and the city refused to send troops to defend the Bernese border. By mid-March 1798 Aarau was occupied by French troops. On 22 March 1798 Aarau was declared the capital of the Helvetic Republic. It was therefore the first capital of a unified Switzerland. Parliament met in the city hall. On 20 September, the capital was moved to Lucerne. Aarau as canton capital. In 1803, Napoleon ordered the fusion of the cantons of Aargau, Baden and Fricktal. Aarau was declared the capital of the new, enlarged canton of Aargau. In 1820 the city wall was torn down, with the exception of the individual towers and gates, and the defensive ditches were filled in. The wooden bridge across the Aare, dating from the Middle Ages, was destroyed by floods three times in thirty years, and was replaced with a steel suspension bridge in 1851. This was replaced by a concrete bridge in 1952. The city was linked up to the Swiss Central Railway in 1856. The textile industry in Aarau collapsed in about 1850 because of the protectionist tariff policies of neighboring states. Other industries had developed by that time to replace it, including the production of mathematical instruments, shoes and cement. Beginning in 1900, numerous electrical enterprises developed. By the 1960s, more citizens worked in service industries or for the canton-level government than in manufacturing. During the 1980s many of the industries left Aarau completely. In 1802 the Canton School was established; it was the first non-parochial high school in Switzerland. It developed a good reputation, and was home to Nobel Prize winners Albert Einstein, Paul Karrer, and Werner Arber, as well as several Swiss politicians and authors. The purchase of a manuscript collection in 1803 laid the foundation for what would become the Cantonal Library, which contains a Bible annotated by Huldrych Zwingli, along with other manuscripts and incunabula. More newspapers developed in the city, maintaining the revolutionary atmosphere of Aarau. Since 1820, Aarau has been a refuge for political refugees. The urban educational and cultural opportunities of Aarau were extended through numerous new institutions. A Theatre and Concert Hall was constructed in 1883, which was renovated and expanded in 1995–96. The Aargau Nature Museum opened in 1922. A former cloth warehouse was converted into a small theatre in 1974, and the alternative culture center KIFF (Culture in the fodder factory) was established in a former animal fodder factory. Origin of the name. 
The earliest use of the place name was in 1248 (in the form Arowe), and probably referred to the settlement in the area before the founding of the city. It comes, along with the name of the River Aare (which was called Arula, Arola, and Araris in early times), from the German word "Au", meaning floodplain. Old town. The historic old town forms an irregular square, consisting of four parts (called "Stöcke"). To the south lies the Laurenzenvorstadt, that is, the part of the town formerly outside the city wall. One characteristic of the city is its painted gables, for which Aarau is sometimes called the "City of beautiful Gables". The old town, Laurenzenvorstadt, government building, cantonal library, state archive and art museum are all listed as heritage sites of national significance. The buildings in the old city originate, on the whole, from building projects during the 16th century, when nearly all the Middle Age period buildings were replaced or expanded. The architectural development of the city ended in the 18th century, when the city began to expand beyond its (still existing) wall. Most of the buildings in the "suburb" date from this time. The "Schlössli" (small Castle), Rore Tower and the upper gate tower have remained nearly unchanged since the 13th century. The "Schlössli" is the oldest building in the city. It was already founded at the time of the establishment of the city shortly after 1200; the exact date is not known. City hall was built around Rore Tower in 1515. The upper gate tower stands beside the southern gate in the city wall, along the road to Lucerne and Bern. The jail has been housed in it since the Middle Ages. A Carillon was installed in the tower in the middle of the 20th century, the bells for which were provided by the centuries-old bell manufacturers of Aarau. The town church was built between 1471 and 1478. During the Reformation, in 1528, its twelve altars and accompanying pictures were destroyed. The "Justice fountain" (Gerechtigkeitsbrunnen) was built in 1634, and is made of French limestone; it includes a statue of Lady Justice made of sandstone, hence the name. It was originally in the street in front of city hall, but was moved to its present location in front of the town church in 1905 due to increased traffic. Economy. , Aarau had an unemployment rate of 2.35%. , there were 48 people employed in the primary economic sector and about 9 businesses involved in this sector. 4,181 people are employed in the secondary sector and there are 164 businesses in this sector. 20,186 people are employed in the tertiary sector, with 1,461 businesses in this sector. This is a total of over 24,000 jobs, since Aarau's population is about 16,000 it draws workers from many surrounding communities. there were 8,050 total workers who lived in the municipality. Of these, 4,308 or about 53.5% of the residents worked outside Aarau while 17,419 people commuted into the municipality for work. There were a total of 21,161 jobs (of at least 6 hours per week) in the municipality. The largest employer in Aarau is the cantonal government, the offices of which are distributed across the entire city at numerous locations. One of the two head offices of the "Aargauer Zeitung", Switzerland's fifth largest newspaper, is located in Aarau, as are the Tele M1 television channel studios, and several radio stations. Kern & Co., founded in 1819, was an internationally known geodetic instrument manufacturer based in Aarau. 
However, it was taken over by Wild Leitz in 1988, and was closed in 1991. Aarau's small municipal area means that its growth continually pushes against its borders. The urban center lies in the middle of the "Golden Triangle" between Zürich, Bern, and Basel, and Aarau is having increasing difficulty in maintaining the independence of its economic base from the neighboring large cities. The idea of merging Aarau with its neighboring suburbs has recently been discussed in the hope of arresting the slow decline. Manufactured products include bells, mathematical instruments, electrical goods, cotton textiles, cutlery, chemicals, shoes, and other goods. Aarau is famous for the quality of its instruments, cutlery and bells. Markets and fairs. Every Saturday morning there is a vegetable market in the "Graben" at the edge of the Old City. It is supplied with regional products. In the last week of September the MAG (Market of Aarauer Tradesmen) takes place there, with regional companies selling their products. The "Rüeblimärt", a carrot fair, is held in the same place on the first Wednesday in November. The Aarau fair is held at the ice skating rink during the spring. Transport. Aarau railway station is a terminus of the S-Bahn Zürich on line S3. The town is also served with public transport provided by Busbetrieb Aarau AG. Population. The population of Aarau grew continuously from 1800 until about 1960, when the city reached a peak population of 17,045, more than five times its population in 1800. However, since 1960 the population has fallen by 8%. There are three reasons for this population loss: firstly, since the completion of Telli (a large apartment complex), the city has not had any more considerable land developments. Secondly, the number of people per household has fallen; thus, the existing dwellings do not hold as many people. Thirdly, population growth was absorbed by neighboring municipalities in the regional urban area, and numerous citizens of Aarau moved into the countryside. This trend might have stopped since the turn of the 21st century. Existing industrial developments are being used for new purposes instead of standing empty. Aarau has a population (as of ) of . , 19.8% of the population was made up of foreign nationals. Over the last 10 years the population has grown at a rate of 1%. Most of the population () speaks German (84.5%), with Italian being second most common ( 3.3%) and Serbo-Croatian being third ( 2.9%). The age distribution, , in Aarau is: 1,296 children or 8.1% of the population are between 0 and 9 years old and 1,334 teenagers or 8.4% are between 10 and 19. Of the adult population, 2,520 people or 15.8% of the population are between 20 and 29 years old. 2,518 people or 15.8% are between 30 and 39, 2,320 people or 14.6% are between 40 and 49, and 1,987 people or 12.5% are between 50 and 59. The senior population distribution is 1,588 people or 10.0% of the population are between 60 and 69 years old, 1,219 people or 7.7% are between 70 and 79, there are 942 people or 5.9% who are between 80 and 89, and there are 180 people or 1.1% who are 90 and older. , there were 1,365 homes with 1 or 2 persons in the household, 3,845 homes with 3 or 4 persons in the household, and 2,119 homes with 5 or more persons in the household. The average number of people per household was 1.99 individuals. there were 1,594 single family homes (or 18.4% of the total) out of a total of 8,661 homes and apartments. 
In Aarau about 74.2% of the population (between ages 25 and 64) have completed either non-mandatory upper secondary education or additional higher education (either university or a "Fachhochschule"). Of the school-age population (), 861 students attend primary school, 280 students attend secondary school, 455 students attend tertiary or university level schooling, and 35 students are seeking a job after school in the municipality. Sport. The football club FC Aarau play in the Stadion Brügglifeld. They played in the top tier of the Swiss football league system from 1981 until 2010, when they were relegated to the Swiss Challenge League. In the 2013/14 season they climbed back to the highest tier, only to be relegated again. In the 2016/17 season they will play in the Swiss Challenge League. They won the Swiss Cup in 1985 and were Swiss football champions three times, in 1912, 1914 and 1993. The Argovia Stars play in the MySports League, the third highest league of Swiss ice hockey. They play their home games in the 3,000-seat KeBa Aarau Arena. BC Alte Kanti Aarau plays in the Swiss Women's Basketball Championship, the country's top division. Sites. Heritage sites of national significance. Aarau is home to a number of sites that are listed as Swiss heritage sites of national significance. The list includes three churches: the Christian Catholic parish house, the Catholic parish house, and the Reformed "City Church". There are five government buildings on the list: the Cantonal Library (which contains many pieces important to the nation's history) and Art Gallery, the old Cantonal School, the Legislature, the Cantonal Administration building, and the archives. Three gardens or parks are on the list: "Garten Schmidlin", "Naturama Aargau" and the "Schlossgarten". The remaining four buildings on the list are the former Rickenbach Factory, the Crematorium, the "Haus zum Erker" at Rathausgasse 10 and the "Restaurant Zunftstube" at Pelzgasse. Tourist sites. The Bally Shoe company has a unique shoe museum in the city. There is also the Trade Museum, which contains stained glass windows from Muri Convent and paintings. Annual events. Each May, Aarau plays host to the annual Jazzaar Festival, attracting the world's top jazz musicians. Religion. From the , 4,473 or 28.9% were Roman Catholic, while 6,738 or 43.6% belonged to the Swiss Reformed Church. Of the rest of the population, there are 51 individuals (or about 0.33% of the population) who belong to the Christian Catholic, i.e. Old Catholic, faith. Government. Legislative. In place of a town meeting, a town assembly ("Einwohnerrat") of 50 members is elected by the citizens, and follows the policy of proportional representation. It is responsible for approving tax levels, preparing the annual account, and the business report. In addition, it can issue regulations. The term of office is four years. In the last two elections the parties had the following representation: At the district level, some elements of the government remain a direct democracy. There are optional and obligatory referendums, and the population retains the right to establish an initiative. Executive. The executive authority is the town council ("Stadtrat"). The term of office is four years, and its members are elected by a plurality voting system. It leads and represents the municipality. It carries out the resolutions of the assembly, and those requested by the canton and national level governments. 
The seven members (and their party) are: National elections. In the 2007 federal election the most popular party was the SP which received 27.9% of the vote. The next three most popular parties were the SVP (22.1%), the FDP (17.5%) and the Green Party (11.8%). Coat of arms. The blazon of the municipal coat of arms is "Argent an Eagle displayed Sable beaked langued and membered Gules and a Chief of the last." International relations. Twin towns – sister cities. Aarau is twinned with:
2467
Aargau
Aargau ( , ), more formally the Canton of Aargau (; ; ; ), is one of the 26 cantons forming the Swiss Confederation. It is composed of eleven districts and its capital is Aarau. Aargau is one of the most northerly cantons of Switzerland. It is situated by the lower course of the Aare River, which is why the canton is called "Aar-gau" (meaning "Aare province"). It is one of the most densely populated regions of Switzerland. History. Early history. The area of Aargau and the surrounding areas were controlled by the Helvetians, a tribe of Celts, as far back as 200 BC. It was eventually occupied by the Romans and then by the 6th century, the Franks. The Romans built a major settlement called Vindonissa, near the present location of Brugg. Medieval Aargau. The reconstructed Old High German name of Aargau is "Argowe", first unambiguously attested (in the spelling "Argue") in 795. The term described a territory only loosely equivalent to that of the modern canton, including the region between Aare and Reuss rivers, including Pilatus and Napf, i.e. including parts of the modern cantons of Bern (Bernese Aargau, Emmental, parts of the Bernese Oberland), Solothurn, Basel-Landschaft, Lucerne, Obwalden and Nidwalden, but not the parts of the modern canton east of the Reuss (Baden District), which were part of Zürichgau. Within the Frankish Empire (8th to 10th centuries), the area was a disputed border region between the duchies of Alamannia and Burgundy. A line of the von Wetterau (Conradines) intermittently held the countship of Aargau from 750 until about 1030, when they lost it (having in the meantime taken the name von Tegerfelden). This division became the ill-defined (and sparsely settled) outer border of the early Holy Roman Empire at its formation in the second half of the 10th century. Most of the region came under the control of the ducal house of Zähringen and the comital houses of Habsburg and Kyburg by about 1200. In the second half of the 13th century, the territory became divided between the territories claimed by the imperial cities of Bern, Lucerne and Solothurn and the Swiss canton of Unterwalden. The remaining portion, largely corresponding to the modern canton of Aargau, remained under the control of the Habsburgs until the "conquest of Aargau" by the Old Swiss Confederacy in 1415. Habsburg Castle itself, the original seat of the House of Habsburg, was taken by Bern in April 1415. The Habsburgs had founded a number of monasteries (with some structures enduring, e.g., in Wettingen and Muri), the closing of which by the government in 1841 was a contributing factor to the outbreak of the Swiss civil war – the "Sonderbund War" – in 1847. Under the Swiss Confederation. When Frederick IV of Habsburg sided with Antipope John XXIII at the Council of Constance, Emperor Sigismund placed him under the Imperial ban. In July 1414, the Pope visited Bern and received assurances from them, that they would move against the Habsburgs. A few months later the Swiss Confederation denounced the Treaty of 1412. Shortly thereafter in 1415, Bern and the rest of the Swiss Confederation used the ban as a pretext to invade the Aargau. The Confederation was able to quickly conquer the towns of Aarau, Lenzburg, Brugg and Zofingen along with most of the Habsburg castles. Bern kept the southwest portion (Zofingen, Aarburg, Aarau, Lenzburg, and Brugg), northward to the confluence of the Aare and Reuss. The important city of Baden was taken by a united Swiss army and governed by all 8 members of the Confederation. 
Some districts, named the "Freie Ämter" ("free bailiwicks") – Mellingen, Muri, Villmergen, and Bremgarten, with the countship of Baden – were governed as "subject lands" by all or some of the Confederates. Shortly after the conquest of the Aargau by the Swiss, Frederick humbled himself to the Pope. The Pope reconciled with him and ordered all of the taken lands to be returned. The Swiss refused, and years later, after no serious attempts at re-acquisition, the Duke officially relinquished his rights to the Swiss. Unteraargau or Berner Aargau. Bern's portion of the Aargau came to be known as the Unteraargau, though it can also be called the Berner or Bernese Aargau. In 1514 Bern expanded north into the Jura and so came into possession of several strategically important mountain passes into the Austrian Fricktal. This land was added to the Unteraargau and was directly ruled from Bern. It was divided into seven rural bailiwicks and four administrative cities, Aarau, Zofingen, Lenzburg and Brugg. While the Habsburgs were driven out, many of their minor nobles were allowed to keep their lands and offices, though over time they lost power to the Bernese government. The bailiwick administration was based on a very small staff of officials, mostly made up of Bernese citizens, but with a few locals. When Bern converted during the Protestant Reformation in 1528, the Unteraargau also converted. At the beginning of the 16th century a number of anabaptists migrated into the upper Wynen and Rueder valleys from Zürich. Despite pressure from the Bernese authorities in the 16th and 17th centuries, anabaptism never entirely disappeared from the Unteraargau. Bern used the Aargau bailiwicks mostly as a source of grain for the rest of the city-state. The administrative cities remained economically only of regional importance. However, in the 17th and 18th centuries Bern encouraged industrial development in the Unteraargau, and by the late 18th century it was the most industrialized region in the city-state. The high industrialization led to high population growth in the 18th century; for example, between 1764 and 1798 the population grew by 35%, far more than in other parts of the canton. In 1870 the proportion of farmers in Aarau, Lenzburg, Kulm, and Zofingen districts was 34–40%, while in the other districts it was 46–57%. Freie Ämter. The rest of the Freie Ämter were collectively administered as subject territories by the rest of the Confederation. Muri "Amt" was assigned to Zürich, Lucerne, Schwyz, Unterwalden, Zug and Glarus, while the "Ämter" of Meienberg, Richensee and Villmergen were first given to Lucerne alone. The final boundary was set in 1425 by an arbitration tribunal, and Lucerne had to give up the three "Ämter" to be collectively ruled. The four "Ämter" were then consolidated under a single Confederation bailiff into what was known in the 15th century as the "Waggental" Bailiwick (). In the 16th century, it came to be known as the "Vogtei der Freien Ämter". While the "Freien Ämter" often had independent lower courts, they were forced to accept the Confederation's sovereignty. Finally, in 1532, the canton of Uri became part of the collective administration of the Freien Ämter. At the time of the Protestant Reformation, the majority of the Ämter converted to the new faith. In 1529, a wave of iconoclasm swept through the area and wiped away much of the old religion. 
After the defeat of Zürich in the Second Battle of Kappel in 1531, the victorious five Catholic cantons marched their troops into the Freie Ämter and reconverted them to Catholicism. In the First War of Villmergen, in 1656, and the Toggenburg War (or Second War of Villmergen), in 1712, the Freie Ämter became the staging ground for the warring Reformed and Catholic armies. While the peace after the 1656 war did not change the status quo, the fourth Peace of Aarau in 1712 brought about a reorganization of power relations. The victory gave Zürich the opportunity to force the Catholic cantons out of the government in the county of Baden and the adjacent area of the Freie Ämter. The Freie Ämter were then divided in two by a line drawn from the gallows in Fahrwangen to the Oberlunkhofen church steeple. The northern part, the so-called Unteren Freie Ämter (lower Freie Ämter), which included the districts of Boswil (in part) and Hermetschwil and the Niederamt, was ruled by Zürich, Bern and Glarus. The southern part, the Oberen Freie Ämter (upper Freie Ämter), was ruled by the previous seven cantons, but with Bern added to make an eighth. During the Helvetic Republic (1798–1803), the county of Baden, the Freie Ämter and the area known as the Kelleramt were combined into the canton of Baden. County of Baden. The County of Baden was a shared condominium of the entire Old Swiss Confederacy. After the Confederacy's conquest in 1415, the Confederates retained much of the Habsburg legal structure, which caused a number of problems. The local nobility had the right to hold the low court in only about one fifth of the territory. There were over 30 different nobles who had the right to hold courts scattered around the surrounding lands. All these overlapping jurisdictions caused numerous conflicts, but gradually the Confederation was able to acquire these rights in the county. The cities of Baden, Bremgarten and Mellingen became the administrative centers and held the high courts. Together with the courts, the three administrative centers had considerable local autonomy, but were ruled by a governor who was appointed by the "Acht Orte" every two years. After the Protestant victory at the Second Battle of Villmergen, the administration of the County changed slightly. Instead of the "Acht Orte" appointing a bailiff together, Zürich and Bern each appointed the governor for 7 out of 16 years while Glarus appointed him for the remaining two years. The chaotic legal structure and fragmented land ownership, combined with a tradition of dividing the land among all the heirs in an inheritance, prevented any large-scale reforms. The governor tried in the 18th century to reform and standardize laws and ownership across the county, but with limited success. With an ever-changing administration, the County lacked a coherent long-term economic policy or support for reforms. By the end of the 18th century there were no factories or mills and only a few small cottage industries along the border with Zürich. Road construction first became a priority after 1750, when Zürich and Bern began appointing a governor for seven years. During the Protestant Reformation, some of the municipalities converted to the new faith. However, starting in 1531, some of the old parishes were converted back to the old faith. The governors were appointed from both Catholic and Protestant cantons, and since they changed every two years, neither faith gained a majority in the county. 
After the French invasion, on 19 March 1798, the governments of Zürich and Bern agreed to the creation of the short-lived canton of Baden in the Helvetic Republic. With the Act of Mediation in 1803, the canton of Baden was dissolved. Portions of the lands of the former County of Baden now became the District of Baden in the newly created canton of Aargau. After World War II, this formerly agrarian region saw striking growth and became the district with the largest and densest population in the canton (110,000 in 1990, 715 persons per km2). Forming the canton of Aargau. The contemporary canton of Aargau was formed in 1803, as a canton of the Swiss Confederation, as a result of the Act of Mediation. It was a combination of three short-lived cantons of the Helvetic Republic: Aargau (1798–1803), Baden (1798–1803) and Fricktal (1802–1803). Its creation is therefore rooted in the Napoleonic era. In the year 2003, the canton of Aargau celebrated its 200th anniversary. French forces occupied the Aargau from 10 March to 18 April 1798; thereafter the Bernese portion became the canton of Aargau and the remainder formed the canton of Baden. Aborted plans to merge the two halves came in 1801 and 1802, and they were eventually united under the name Aargau, which was then admitted as a full member of the reconstituted Confederation following the Act of Mediation. Some parts of the canton of Baden at this point were transferred to other cantons: the "Amt" of Hitzkirch to Lucerne, whilst Hüttikon, Oetwil an der Limmat, Dietikon and Schlieren went to Zürich. In return, Lucerne's "Amt" of Merenschwand was transferred to Aargau (district of Muri). The Fricktal, ceded in 1802 by Austria via Napoleonic France to the Helvetic Republic, was briefly a separate canton of the Helvetic Republic (the canton of Fricktal) under a "Statthalter" ('Lieutenant'), but on 19 March 1803 (following the Act of Mediation) was incorporated into the canton of Aargau. The former cantons of Baden and Fricktal can still be identified with the contemporary districts – the canton of Baden is covered by the districts of Zurzach, Baden, Bremgarten, and Muri (albeit with the gains and losses of 1803 detailed above); the canton of Fricktal by the districts of Rheinfelden and Laufenburg (except for Hottwil which was transferred to that district in 2010). Chief magistracy. The chief magistracy of Aargau changed its style repeatedly: Jewish history in Aargau. In the 17th century, Aargau was the only federal condominium where Jews were tolerated. In 1774, they were restricted to just two towns, Endingen and Lengnau. While the rural upper class pressed incessantly for the expulsion of the Jews, the financial interests of the authorities prevented it. They imposed special taxes on peddling and cattle trading, the primary Jewish professions. The Protestant occupiers also took satisfaction in the discomfort that the presence of the Jewish community caused the local Catholics. The Jews were directly subordinate to the governor; from 1696, they were compelled to renew a letter of protection from him every 16 years. During this period, Jews and Christians were not allowed to live under the same roof, nor were Jews allowed to own land or houses. They were taxed at a much higher rate than others and, in 1712, the Lengnau community was "pillaged." In 1760, they were further restricted regarding marriages and procreation. An exorbitant tax was levied on marriage licenses; oftentimes, they were outright refused. This remained the case until the 19th century. 
In 1799, the Helvetic Republic abolished all special tolls, and, in 1802, removed the poll tax. On 5 May 1809, they were declared citizens and given broad rights regarding trade and farming. They were still restricted to Endingen and Lengnau until 7 May 1846, when their right to move and reside freely within the canton of Aargau was granted. On 24 September 1856, the Swiss Federal Council granted them full political rights within Aargau, as well as broad business rights; however, the majority Christian population did not fully abide by these new liberal laws. In 1860 the cantonal government voted to grant the Jews suffrage in all local matters and to give their communities autonomy. Before the law was enacted, however, it was repealed due to vocal opposition led by the Ultramontane Party. Finally, in July 1863, the federal authorities granted all Jews full rights of citizenship. However, they did not receive all of the rights in Endingen and Lengnau until a resolution of the Grand Council, on 15 May 1877, granted citizens' rights to the members of the Jewish communities of those places, giving them charters under the names of New Endingen and New Lengnau. The "Swiss Jewish Kulturverein" was instrumental in this fight from its founding in 1862 until it was dissolved 20 years later. During this period of diminished rights, they were not even allowed to bury their dead in Swiss soil and instead had to bury them on an island called "Judenäule" (Jews' Isle) in the Rhine near Waldshut. Beginning in 1603, the deceased Jews of the Surbtal communities were buried on the river island, which was leased by the Jewish community. As the island was repeatedly flooded and devastated, in 1750 the Surbtal Jews asked the "Tagsatzung" to establish the Endingen cemetery in the vicinity of their communities. Geography. The capital of the canton is Aarau, which is located on its western border, on the Aare. The canton borders Germany (Baden-Württemberg) to the north, the Rhine forming the border. To the west lie the Swiss cantons of Basel-Landschaft, Solothurn and Bern; the canton of Lucerne lies to the south, and Zürich and Zug to the east. Its total area is . Besides the Rhine, it contains two large rivers, the Aare and the Reuss. The canton of Aargau is one of the least mountainous Swiss cantons, forming part of a great table-land, to the north of the Alps and the east of the Jura, above which rise low hills. The surface of the country is diversified with undulating tracts and well-wooded hills, alternating with fertile valleys watered mainly by the Aare and its tributaries. The valleys alternate with hills, many of which are wooded. Slightly over one-third of the canton is wooded (), while nearly half is used for farming (). About 2.4% of the canton is considered unproductive, mostly lakes (notably Lake Hallwil) and streams. With a population density of 450/km2 (1,200/sq mi), the canton has a relatively high amount of land used for human development, with about 15% of the canton developed for housing or transportation. It contains the hot sulphur springs of Baden and Schinznach-Bad, while at Rheinfelden there are very extensive saline springs. Just below Brugg the Reuss and the Limmat join the Aare, while around Brugg are the ruined castle of Habsburg, the old convent of Königsfelden (with fine painted medieval glass) and the remains of the Roman settlement of Vindonissa (Windisch). 
Fahr Monastery forms a small exclave of the canton, otherwise surrounded by the canton of Zürich, and since 2008 is part of the Aargau municipality of Würenlos. Political subdivisions. Districts. Aargau is divided into 11 districts: The most recent change in district boundaries occurred in 2010 when Hottwil transferred from Brugg to Laufenburg, following its merger with other municipalities, all of which were in Laufenburg. Municipalities. There are (as of 2014) 213 municipalities in the canton of Aargau. As with most Swiss cantons there has been a trend since the early 2000s for municipalities to merge, though mergers in Aargau have so far been less radical than in other cantons. Coat of arms. The blazon of the coat of arms is "Per pale, dexter: sable, a fess wavy argent, charged with two cotises wavy azure; sinister: sky blue, three mullets of five argent." The flag and arms of the canton of Aargau date to 1803 and are an original design by Samuel Ringier-Seelmatter; the current official design, specifying the stars as five-pointed, dates to 1930. Demographics. Aargau has a population () of . , 21.5% of the population are resident foreign nationals. Over the last 10 years (2000–2010) the population has changed at a rate of 11%. Migration accounted for 8.7%, while births and deaths accounted for 2.8%. Most of the population () speaks German (477,093 or 87.1%) as their first language, Italian is the second most common (17,847 or 3.3%) and Serbo-Croatian is the third (10,645 or 1.9%). There are 4,151 people who speak French and 618 people who speak Romansh. Of the population in the canton, 146,421 or about 26.7% were born in Aargau and lived there in 2000. There were 140,768 or 25.7% who were born in the same canton, while 136,865 or 25.0% were born somewhere else in Switzerland, and 107,396 or 19.6% were born outside of Switzerland. , children and teenagers (0–19 years old) make up 24.3% of the population, while adults (20–64 years old) make up 62.3% and seniors (over 64 years old) make up 13.4%. , there were 227,656 people who were single and never married in the canton. There were 264,939 married individuals, 27,603 widows or widowers and 27,295 individuals who are divorced. , there were 224,128 private households in the canton, and an average of 2.4 persons per household. There were 69,062 households that consist of only one person and 16,254 households with five or more people. , the construction rate of new housing units was 6.5 new units per 1000 residents. The vacancy rate for the canton, , was 1.54%. The majority of the population is centered on one of three areas: the Aare Valley, the side branches of the Aare Valley, or along the Rhine. Historic population. The historical population is given in the following chart: Politics. In the 2011 federal election, the most popular party was the SVP which received 34.7% of the vote. The next three most popular parties were the SP/PS (18.0%), the FDP (11.5%) and the CVP (10.6%). The SVP received about the same percentage of the vote as they did in the 2007 Federal election (36.2% in 2007 vs 34.7% in 2011). The SPS retained about the same popularity (17.9% in 2007), the FDP retained about the same popularity (13.6% in 2007) and the CVP retained about the same popularity (13.5% in 2007). Cantonal politics. The Grand Council of the canton of Aargau is called Grosser Rat. It is the legislature of the canton, has 140 seats, with members elected every four years. Religion. 
From the , 219,800 or 40.1% were Roman Catholic, while 189,606 or 34.6% belonged to the Swiss Reformed Church. Of the rest of the population, there were 11,523 members of an Orthodox church (or about 2.10% of the population), there were 3,418 individuals (or about 0.62% of the population) who belonged to the Christian Catholic Church, and there were 29,580 individuals (or about 5.40% of the population) who belonged to another Christian church. There were 342 individuals (or about 0.06% of the population) who were Jewish, and 30,072 (or about 5.49% of the population) who were Muslim. There were 1,463 individuals who were Buddhist, 2,089 individuals who were Hindu and 495 individuals who belonged to another church. 57,573 (or about 10.52% of the population) belonged to no church, are agnostic or atheist, and 15,875 individuals (or about 2.90% of the population) did not answer the question. Education. In Aargau about 212,069 or (38.7%) of the population have completed non-mandatory upper secondary education, and 70,896 or (12.9%) have completed additional higher education (either university or a "Fachhochschule"). Of the 70,896 who completed tertiary schooling, 63.6% were Swiss men, 20.9% were Swiss women, 10.4% were non-Swiss men and 5.2% were non-Swiss women. Economy. , Aargau had an unemployment rate of 3.6%. , there were 11,436 people employed in the primary economic sector and about 3,927 businesses involved in this sector. 95,844 people were employed in the secondary sector and there were 6,055 businesses in this sector. 177,782 people were employed in the tertiary sector, with 21,530 businesses in this sector. the total number of full-time equivalent jobs was 238,225. The number of jobs in the primary sector was 7,167, of which 6,731 were in agriculture, 418 were in forestry or lumber production and 18 were in fishing or fisheries. The number of jobs in the secondary sector was 90,274 of which 64,089 or (71.0%) were in manufacturing, 366 or (0.4%) were in mining and 21,705 (24.0%) were in construction. The number of jobs in the tertiary sector was 140,784. In the tertiary sector; 38,793 or 27.6% were in the sale or repair of motor vehicles, 13,624 or 9.7% were in the movement and storage of goods, 8,150 or 5.8% were in a hotel or restaurant, 5,164 or 3.7% were in the information industry, 5,946 or 4.2% were the insurance or financial industry, 14,831 or 10.5% were technical professionals or scientists, 10,951 or 7.8% were in education and 21,952 or 15.6% were in health care. Of the working population, 19.5% used public transportation to get to work, and 55.3% used a private car. Public transportation – bus and train – is provided by Busbetrieb Aarau AG. The farmland of the canton of Aargau is some of the most fertile in Switzerland. Dairy farming, cereal and fruit farming are among the canton's main economic activities. The canton is also industrially developed, particularly in the fields of electrical engineering, precision instruments, iron, steel, cement and textiles. Three of Switzerland's five nuclear power plants are in the canton of Aargau (Beznau I + II and Leibstadt). Additionally, the many rivers supply enough water for numerous hydroelectric power plants throughout the canton. The canton of Aargau is often called "the energy canton". A significant number of people commute into the financial center of the city of Zürich, which is just across the cantonal border. As such the per capita cantonal income (in 2005) is 49,209 CHF. 
Tourism is significant, particularly for the hot springs at Baden and Schinznach-Bad, the ancient castles, the landscape, and the many old museums in the canton. Hillwalking is another tourist attraction but is of only limited significance.
2469
Ab
Ab
2470
Aba
Aba may refer to:
2471
Ababda people
The Ababda ( or ) are an Arab or Beja tribe in eastern Egypt and Sudan. Historically, most were Bedouins living in the area between the Nile and the Red Sea, with some settling along the trade route linking Korosko with Abu Hamad. Numerous traveler accounts from the nineteenth century report that some Ababda at that time still spoke Beja or a language of their own, hence many secondary sources consider the Ababda to be a Beja subtribe. Most Ababda now speak Arabic and identify as an Arab tribe from the Hijaz. The Ababda have a total population of over 250,000 people. Origin and history. Ababda tribal origin narratives identify them as an Arab people from the Hijaz, descended from Zubayr ibn al-Awwam (possibly through his son Abd Allah ibn al-Zubayr) following the Muslim conquest of Egypt. Many published sources in Western languages identify the Ababda as a subtribe of the Beja, or as descendants of speakers of a Cushitic language. Language. Arabic. Today, virtually all Ababda communities speak Arabic. There is no oral tradition of having spoken any other language prior to Arabic, in keeping with Ababda Arab origin narratives. In a 1996 study, Rudolf de Jong found that the Ababda dialect of Arabic was quite similar to that of the Shukriya people of the Sudan, and concluded that it was an extension of the northern Sudanese dialect area. Alfred von Kremer reported in 1863 that the Ababda had developed an Arabic-based thieves' cant that only they understood. Ababda or Beja language. The Ababda may have spoken a dialect of Beja before Arabic, but if so, nothing of that dialect is preserved today. Several early travellers reported that the Ababda spoke a distinct language, which they either identified as Beja or left without further description. In around 1770 the Scottish traveller James Bruce claimed that they spoke the "Barabra" language, i.e. Nubian. At the turn of the 19th century, during the French campaign in Egypt and Syria, the engineer Dubois-Aymé wrote that the Ababda understood Arabic, but still spoke a language of their own. In the 1820s Eduard Rüppell briefly stated that the Ababda spoke their own, seemingly non-Arabic language. A similar opinion was expressed by Pierre Trémaux after his journey in Sudan in the late 1840s. John Lewis Burckhardt reported that in 1813 those Ababda who co-resided with the Bishari tribe spoke Beja. Alfred von Kremer believed them to be native Beja-speakers and was told that the Ababda were bilingual in Arabic, which they spoke with a heavy accent. Those who resided with the Nubians spoke Kenzi. Robert Hartmann, who visited the country in 1859/60, noted that the vast majority of the Ababda now spoke Arabic. However, in the past they used to speak a Beja dialect that was now, as he was told, solely restricted to a few nomadic families roaming the Eastern Desert. He believed that they abandoned their language in favour of Arabic due to their close contact with other arabophone tribes. The Swedish linguist Herman Almkvist, writing in 1881, counted the Ababda among the Beja and noted that most had discarded the Beja language, supposedly identical to the Bishari dialect, in favour of Arabic, although "quite a lot" were still capable of understanding and even speaking Beja. Bishari informants told him that in the past, the Bishari and Ababda were the same people. Joseph Russegger, who visited the country around 1840, noted that the Ababda spoke their own language, although he added that it was heavily mixed with Arabic. 
He believed it to be a "Nubian Bedouin" language and implied that this language, and the Ababda customs and appearance in general, were similar to those of the Bishari. Traveller Bayard Taylor wrote in 1856 that the Ababda spoke a language different from that of the Bishari, although it "probably sprang from the same original stock." The French Orientalist Eusebe de Salle concluded in 1840, after attending a Beja conversation between Ababda and Bishari, that both understood each other reasonably well, but that the Ababda "definitely" had a language of their own. The physician Carl Benjamin Klunzinger wrote in 1878 that the Ababda would always speak Arabic while conversing with strangers, avoiding speaking their own language, which he thought was a mixture of Arabic and Beja.
2472
American Quarter Horse
The American Quarter Horse, or Quarter Horse, is an American breed of horse that excels at sprinting short distances. Its name is derived from its ability to outrun other horse breeds in races of a quarter mile or less; some have been clocked at speeds up to 44 mph (70.8 km/h). The development of the Quarter Horse traces to the 1600s. The American Quarter Horse is the most popular breed in the United States, and the American Quarter Horse Association is the largest breed registry in the world, with almost three million living American Quarter Horses registered in 2014. The American Quarter Horse is well known both as a race horse and for its performance in rodeos, horse shows, and as a working ranch horse. The compact body of the American Quarter Horse is well suited for the intricate and quick maneuvers required in reining, cutting, working cow horse, barrel racing, calf roping, and other western riding events, especially those involving live cattle. The American Quarter Horse is also used in English disciplines, driving, show jumping, dressage, hunting, and many other equestrian activities. The Texas Legislature designated the American Quarter Horse as the official "State Horse of Texas" in 2009, and Oklahoma also designated the Quarter Horse as its official state horse in 2022. Breed history. Colonial era. In the 1600s on the Eastern seaboard of colonial America, imported English Thoroughbred horses were first bred with assorted native horses. One of the most famous of these early imports was Janus, a Thoroughbred who was the grandson of the Godolphin Arabian. He was foaled in 1746, and imported to colonial Virginia in 1756. The influence of Thoroughbreds like Janus contributed genes crucial to the development of the colonial "Quarter Horse". The resulting horse was small, hardy, quick, and was used as a work horse during the week and a race horse on the weekends. As flat racing became popular with the colonists, the Quarter Horse gained even more popularity as a sprinter over courses that, by necessity, were shorter than the classic racecourses of England. These courses were often no more than a straight stretch of road or flat piece of open land. When competing against a Thoroughbred, local sprinters often won. As the Thoroughbred breed became established in America, many colonial Quarter Horses were included in the original American stud books. This began a long association between the Thoroughbred breed and what would later become officially known as the "Quarter Horse", named after the race distance at which it excelled. Some Quarter Horses have been clocked at up to 44 mph. Westward expansion. In the 19th century, pioneers heading West needed a hardy, willing horse. On the Great Plains, settlers encountered horses that descended from the Spanish stock Hernán Cortés and other Conquistadors had introduced into the viceroyalty of New Spain, which became the Southwestern United States and Mexico. The horses of the West included herds of feral animals known as Mustangs, as well as horses domesticated by Native Americans, including the Comanche, Shoshoni and Nez Perce tribes. As the colonial Quarter Horse was crossed with these western horses, the pioneers found that the new crossbred had innate "cow sense", a natural instinct for working with cattle, making it popular with cattlemen on ranches. Development as a distinct breed. 
Early foundation sires of Quarter horse type included Steel Dust, foaled 1843; Shiloh (or Old Shiloh), foaled 1844; Old Cold Deck (1862); Lock's Rondo, one of many "Rondo" horses, foaled in 1880; Old Billy—again, one of many "Billy" horses—foaled ; Traveler, a stallion of unknown breeding, known to have been in Texas by 1889; and Peter McCue, foaled 1895, registered as a Thoroughbred but of disputed pedigree. Another early foundation sire for the breed was Copperbottom, foaled in 1828, who tracks his lineage through the Byerley Turk, a foundation sire of the Thoroughbred horse breed. The main duty of the ranch horse in the American West was working cattle. Even after the invention of the automobile, horses were still irreplaceable for handling livestock on the range. Thus, major Texas cattle ranches, such as the King Ranch, the 6666 (Four Sixes) Ranch, and the Waggoner Ranch played a significant role in the development of the modern Quarter Horse. The skills required by cowboys and their horses became the foundation of the rodeo, a contest which began with informal competition between cowboys and expanded to become a major competitive event throughout the west. The Quarter Horse dominates in events that require speed as well as the ability to handle cattle. Sprint races were also popular weekend entertainment and racing became a source of economic gain for breeders. As a result, more Thoroughbred blood was added into the developing American Quarter Horse breed. The American Quarter Horse also benefitted from the addition of Arabian, Morgan, and even Standardbred bloodlines. In 1940, the American Quarter Horse Association (AQHA) was formed by a group of horsemen and ranchers from the Southwestern United States dedicated to preserving the pedigrees of their ranch horses. After winning the 1941 Fort Worth Exposition and Fat Stock Show grand champion stallion, the horse honored with the first registration number, P-1, was Wimpy, a descendant of the King Ranch foundation sire Old Sorrel. Other sires alive at the founding of the AQHA were given the earliest registration numbers Joe Reed P-3, Chief P-5, Oklahoma Star P-6, Cowboy P-12, and Waggoner's Rainy Day P-13. The Thoroughbred race horse Three Bars, alive in the early years of the AQHA, is recognized by the American Quarter Horse Hall of Fame as one of the significant foundation sires for the Quarter Horse breed. Other significant Thoroughbred sires seen in early AQHA pedigrees include Rocket Bar, Top Deck and Depth Charge. "Appendix" and "Foundation" horses. Since the American Quarter Horse was formally established as a breed, the AQHA stud book has remained open to additional Thoroughbred blood via a performance standard. An "Appendix" American Quarter Horse is a first generation cross between a registered Thoroughbred and an American Quarter Horse or a cross between a "numbered" American Quarter Horse and an "appendix" American Quarter Horse. The resulting offspring is registered in the "appendix" of the American Quarter Horse Association's studbook, hence the nickname. Horses listed in the appendix may be entered in competition, but offspring are not initially eligible for full AQHA registration. If the Appendix horse meets certain conformational criteria and is shown or raced successfully in sanctioned AQHA events, the horse can earn its way from the appendix into the permanent studbook, making its offspring eligible for AQHA registration. 
Since Quarter Horse/Thoroughbred crosses continue to enter the official registry of the American Quarter Horse breed, this creates a continual gene flow from the Thoroughbred breed into the American Quarter Horse breed, which has altered many of the characteristics that typified the breed in the early years of its formation. Some breeders argue that the continued addition of Thoroughbred bloodlines is beginning to compromise the integrity of the breed standard. Some favor the earlier style of horse and have created several separate organizations to promote and register "Foundation" Quarter Horses. Modern American Quarter Horse. The American Quarter Horse is a show horse, race horse, reining and cutting horse, rodeo competitor, ranch horse, and all-around family horse. Quarter Horses are commonly used in rodeo events such as barrel racing, calf roping and team roping, as well as gymkhana or O-Mok-See. Other stock horse events such as cutting and reining are open to all breeds but are dominated by American Quarter Horses. The breed is not only well suited for western riding and cattle work: many race tracks offer Quarter Horses a wide assortment of pari-mutuel horse racing with earnings in the millions. Quarter Horses have also been trained to compete in dressage and show jumping. They are also used for recreational trail riding and in mounted police units. The American Quarter Horse has also been exported worldwide. European nations such as Germany and Italy have imported large numbers of Quarter Horses. After the American Quarter Horse Association (which also encompasses Quarter Horses from Canada), the second largest registry of Quarter Horses is in Brazil, followed by Australia. In the UK, the breed is also becoming very popular, especially with the two Western riding associations, the Western Horse Association and The Western Equestrian Society. The British American Quarter Horse breed society is the AQHA-UK. With the internationalization of the discipline of reining and its acceptance as one of the official seven events of the World Equestrian Games, there is a growing international interest in Quarter Horses. The American Quarter Horse is the most popular breed in the United States, and the American Quarter Horse Association is the largest breed registry in the world, with nearly 3 million American Quarter Horses registered worldwide in 2014. Breed characteristics. The Quarter Horse has a small, short, refined head with a straight profile, and a strong, well-muscled body, featuring a broad chest and powerful, rounded hindquarters. They usually stand between high, although some Halter-type and English hunter-type horses may grow as tall as . There are two main body types: the stock type and the hunter or racing type. The stock horse type is shorter, more compact, stocky and well-muscled, yet agile. The racing and hunter type Quarter Horses are somewhat taller and smoother muscled than the stock type, more closely resembling the Thoroughbred. Quarter Horses come in nearly all colors. The most common color is sorrel, a brownish red, part of the color group called chestnut by most other breed registries. Other recognized colors include bay, black, brown, buckskin, palomino, gray, dun, red dun, grullo (also occasionally referred to as blue dun), red roan, blue roan, bay roan, perlino, cremello, and white. In the past, spotted color patterns were excluded, but now with the advent of DNA testing to verify parentage, the registry accepts all colors as long as both parents are registered.
Stock type. A stock horse is a horse of a type that is well suited for working with livestock, particularly cattle. Reining and cutting horses are smaller in stature, with quick, agile movements and very powerful hindquarters. Western pleasure show horses are often slightly taller, with slower movements, smoother gaits, and a somewhat more level topline – though still featuring the powerful hindquarters characteristic of the Quarter Horse. Halter type. Horses shown in-hand in Halter competition are larger yet, with a very heavily muscled appearance, while retaining small heads with wide jowls and refined muzzles. There is controversy amongst owners, breeder and veterinarians regarding the health effects of the extreme muscle mass that is currently fashionable in the specialized halter horse, which typically is and weighs in at over when fitted for halter competition. Not only are there concerns about the weight to frame ratio on the horse's skeletal system, but the massive build is also linked to hyperkalemic periodic paralysis (HYPP) in descendants of the stallion Impressive (see Genetic diseases below). Racing and hunter type. Quarter Horse race horses are bred to sprint short distances ranging from 220 to 870 yards. Thus, they have long legs and are leaner than their stock type counterparts, but are still characterized by muscular hindquarters and powerful legs. Quarter Horses race primarily against other Quarter Horses, and their sprinting ability has earned them the nickname, "the world's fastest athlete." The show hunter type is slimmer, even more closely resembling a Thoroughbred, usually reflecting a higher percentage of appendix breeding. They are shown in hunter/jumper classes at both breed shows and in open USEF-rated horse show competition. Genetic diseases. There are several genetic diseases of concern to Quarter Horse breeders:
2473
Abacá
Abacá ( ; ), Musa textilis, is a species of banana native to the Philippines. The plant grows to , and averages about . The plant, also known as Manila hemp, has great economic importance, being harvested for its fiber, also called Manila hemp, extracted from the leaf-stems. The lustrous fiber is traditionally hand-loomed into various indigenous textiles (abacá cloth or medriñaque) in the Philippines. They still figure prominently as the traditional material of the barong tagalog, the national male attire of the Philippines, as well as in sheer lace-like fabrics called "nipis" used in various clothing components. Native abacá textiles also survive into the modern era among various ethnic groups, like the "t'nalak" of the T'boli people and the "dagmay" of the Bagobo people. Abacá is also used in traditional Philippine millinery, as well as for bags, shawls, and other decorative items. The hatmaking straw made from Manila hemp is called "tagal" or "tagal straw". The fiber is also exceptionally strong, stronger than hemp and naturally salt-resistant, making it ideal for twines and ropes (especially for maritime shipping). It became a major trade commodity in the colonial era for this reason. The abacá industry declined sharply in the mid-20th century when abacá plantations were decimated by World War II and plant diseases, as well as by the invention of nylon in the 1930s (which eventually replaced the use of abacá in maritime cordage). Today, abacá is mostly used in a variety of specialized paper products including tea bags, filter paper and banknotes. Manila envelopes and Manila paper derive their name from this fiber. Abacá is classified as a hard fiber, along with coir, henequen and sisal. Abacá is grown as a commercial crop in the Philippines, Ecuador, and Costa Rica. Description. The abacá plant is stoloniferous, meaning that the plant produces runners or shoots along the ground that then root at each segment. Cutting and transplanting rooted runners is the primary technique for creating new plants, since seed growth is substantially slower. Abacá has a "false trunk" or pseudostem about in diameter. The leaf stalks (petioles) are expanded at the base to form sheaths that are tightly wrapped together to form the pseudostem. There are from 12 to 25 leaves, dark green on the top and pale green on the underside, sometimes with large brown patches. They are oblong in shape with a deltoid base. They grow in succession. The petioles grow to at least in length. When the plant is mature, the flower stalk grows up inside the pseudostem. The male flower has five petals, each about long. The leaf sheaths contain the valuable fiber. After harvesting, the coarse fibers range in length from long. They are composed primarily of cellulose, lignin, and pectin. The fruit, which is inedible and is rarely seen as harvesting occurs before the plant fruits, grows to about in length and in diameter. It has black turbinate seeds that are in diameter. Systematics. The abacá plant belongs to the banana family, Musaceae; it resembles the closely related wild seeded bananas, "Musa acuminata" and "Musa balbisiana". Its scientific name is "Musa textilis". Within the genus "Musa", it is placed in section "Callimusa" (now including the former section "Australimusa"), members of which have a diploid chromosome number of 2n = 20. Genetic diversity. The Philippines, especially the Bicol region in Luzon, has the most abaca genotypes and cultivars.
Genetic analysis using simple sequence repeats (SSR) markers revealed that the Philippines' abaca germplasm is genetically diverse. Abaca genotypes in Luzon had higher genetic diversity than those in the Visayas and Mindanao. Ninety-five percent of the molecular variance was attributed to variation within populations, and only 5% to variation among populations. Genetic analysis by the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) revealed several clusters irrespective of geographical origin. History. Before synthetic textiles came into use, "M. textilis" was a major source of high quality fiber: soft, silky and fine. Ancestors of the modern abacá are thought to have originated from the eastern Philippines, where there is significant rainfall throughout the year. Wild varieties of abacá can still be found in the interior forests of the island province of Catanduanes, away from cultivated areas. Today, Catanduanes also has many modern cultivars of abacá that are more competitive. For many years, breeders from various research institutions have made the cultivated varieties of Catanduanes even more competitive in local and international markets. As a result, the island has consistently had the highest abacá production in the archipelago. Europeans first came into contact with abacá fibre when Ferdinand Magellan landed in the Philippines in 1521, as the natives were already cultivating it and utilizing it in bulk for textiles. Throughout the Spanish colonial era, it was referred to as "medriñaque" cloth. By 1897, the Philippines were exporting almost 100,000 tons of abacá, and it was one of the three biggest cash crops, along with tobacco and sugar. In fact, from 1850 through the end of the 19th century, sugar or abacá alternated with each other as the biggest export crop of the Philippines. This 19th-century trade was predominantly with the United States, and the making of ropes was done mainly in New England, although in time rope-making shifted back to the Philippines. Excluding the Philippines, abacá was first cultivated on a large scale in Sumatra in 1925 under the Dutch, who had observed its cultivation in the Philippines for cordage since the nineteenth century, followed by plantings in Central America in 1929 sponsored by the U.S. Department of Agriculture. It was also transplanted into India and Guam. Commercial planting began in 1930 in British North Borneo; at the onset of World War II, the supply from the Philippines was eliminated by the Empire of Japan. In the early 1900s, a train running from Danao to Argao would transport Philippine abacá from the plantations to Cebu City for export. The railway system was destroyed during World War II; abacá is now transported to Cebu by road. After the war, the U.S. Department of Agriculture started production in Panama, Costa Rica, Honduras, and Guatemala. Today, abacá is produced primarily in the Philippines and Ecuador. The Philippines produces between 85% and 95% of the world's abacá, and the production employs 1.5 million people. Production has declined because of virus diseases. Cultivation. The plant is normally grown in well-drained loamy soil, using rhizomes planted at the start of the rainy season. In addition, new plants can be started by seeds. Growers harvest abacá fields every three to eight months after an initial growth period of 12–25 months. Harvesting is done by removing the leaf-stems after flowering but before fruit appears.
The plant loses productivity after between 15 and 40 years. The slopes of volcanoes provide a preferred growing environment. Harvesting generally includes several operations involving the leaf sheaths: When the processing is complete, the bundles of fiber are pale and lustrous with a length of . In Costa Rica, more modern harvest and drying techniques are being developed to accommodate the very high yields obtained there. According to the Philippine Fiber Industry Development Authority, the Philippines provided 87.4% of the world's abacá in 2014, earning the Philippines US$111.33 million. The demand is still greater than the supply. The remainder came from Ecuador (12.5%) and Costa Rica (0.1%). The Bicol region in the Philippines produced 27,885 metric tons of abacá in 2014, the largest of any Philippine region. The Philippine Rural Development Program (PRDP) and the Department of Agriculture reported that in 2009–2013, the Bicol Region had a 39% share of Philippine abacá production, with an overwhelming 92% of that coming from Catanduanes Island. Eastern Visayas, the second largest producer, had 24%, and the Davao Region, the third largest producer, had 11% of the total production. Around 42 percent of the total abacá fiber shipments from the Philippines went to the United Kingdom in 2014, making it the top importer. Germany imported 37.1 percent of the abacá pulp from the Philippines, around 7,755 metric tons (MT). Sales of abacá cordage surged 20 percent in 2014 to a total of 5,093 MT from 4,240 MT, with the United States holding around 68 percent of the market. Pathogens. Abacá is vulnerable to a number of pathogens, notably abaca bunchy top virus, abaca bract mosaic virus, and abaca mosaic virus. Uses. Due to its strength, abacá is a sought-after product and is the strongest of the natural fibers. It is used by the paper industry for specialty uses such as tea bags, banknotes and decorative papers. It can be used to make handcrafts such as hats, bags, carpets, clothing and furniture. Lupis is the finest quality of abacá. Sinamay is woven chiefly from abacá. Textiles. Abacá fibers have been woven into sturdy textiles and clothing in the Philippines since pre-colonial times. Along with cotton, they were the main source of textile fibers used for clothing in the pre-colonial Philippines. Abacá cloth was often compared to calico in terms of texture and was a major trade commodity in the pre-colonial maritime trade and the Spanish colonial era. There are multiple traditional types and names of abacá cloth among the different ethnic groups of the Philippines. Undyed plain abacá cloth, woven from fine fibers of abacá, is generally known as "sinamáy" in most of the islands. Abacá cloth with a more delicate texture is called "tinampipi", while especially fine lace-like abacá cloth is called "nipis" or "lupis". Fine abacá fibers may also be woven with "piña", silk, or fine cotton to create a fabric called "jusi". Traditional abacá textiles were often dyed in various colors using natural dyes. These include blue from indigo ("tarum", "dagum", "tayum", etc.); black from ebony ("knalum" or "batulinao") leaves; red from noni roots and "sapang"; yellow from turmeric ("kalawag", "kuning", etc.); and so on. They were often woven into specific patterns, and further ornamented with embroidery, beadwork, and other decorations.
Most clothing made from abacá took the form of the "baro" (also "barú" or "bayú", literally "shirt" or "clothing"), a simple collar-less shirt or jacket with close-fitting long sleeves worn by both men and women in most ethnic groups in the pre-colonial Philippines. These were paired with wraparound sarong-like skirts (for both men and women), close-fitting pants, or loincloths ("bahag"). During the Spanish colonial era, abacá cloth became known as medriñaque in Spanish (apparently derived from a native Cebuano name). It was exported to other Spanish colonies from the 16th century onward. A waistcoat of a native Quechua man in Peru was recorded as being made of medriñaque as early as 1584. Abacá cloth also appears in English records, spelled variously as medrinacks, medrianacks, medrianackes, and medrinacles, among other names. It was used as canvas for sails and for stiffening clothing like skirts, collars, and doublets. Philippine indigenous tribes still weave abacá-based textiles like "t'nalak", made by the T'boli tribe of South Cotabato, and "dagmay", made by the Bagobo people. Abacá cloth is found in museum collections around the world, like the Boston Museum of Fine Arts and the Textile Museum of Canada. The inner fibers are also used in the making of hats, including the "Manila hats," hammocks, matting, cordage, ropes, coarse twines, and types of canvas. Manila rope. Manila rope is a type of rope made from manila hemp. Manila rope is very durable, flexible, and resistant to salt water damage, allowing its use in ropes, hawsers, ships' lines, and fishing nets. A rope can require to break. Manila ropes shrink when they become wet. This effect can be advantageous under certain circumstances, but if it is not a desired feature, it should be taken into account. Since shrinkage is more pronounced the first time the rope becomes wet, new rope is usually immersed in water and dried before use so that the shrinkage is less than it would be if the rope had never been wet. A major disadvantage of this shrinkage is that knots made with manila rope become harder and more difficult to untie when wet, and are thus subjected to increased stress. Manila rope will rot after a period of time when exposed to saltwater. Manila hemp rope was previously the favoured variety of rope used for executions by hanging, both in the U.K. and the USA; such rope was usually 3/4 to 1 inch in diameter and was boiled prior to use to take out any overelasticity. It was also used in the 19th century as whaling line. Abacá fiber was once used primarily for rope, but this application is now of minor significance.
2474
Abaddon
The Hebrew term Abaddon ( "’Ăḇaddōn", meaning "destruction", "doom"), and its Greek equivalent Apollyon (, "Apollúōn" meaning "Destroyer") appear in the Bible as both a place of destruction and an angel of the abyss. In the Hebrew Bible, "abaddon" is used with reference to a bottomless pit, often appearing alongside the place Sheol ( "Šəʾōl"), meaning the resting place of the dead. In the Book of Revelation of the New Testament, an angel called Abaddon is described as the king of an army of locusts; his name is first transcribed in Koine Greek (Revelation 9:11—"whose name in Hebrew is Abaddon,") as , and then translated , "Apollyon". The Vulgate and the Douay–Rheims Bible have additional notes not present in the Greek text, "in Latin "Exterminans"", "exterminans" being the Latin word for "destroyer". Etymology. According to the Brown–Driver–Briggs lexicon, "’ăḇadōn" is an intensive form of the Semitic root and verb stem "’ăḇāḏ" "perish", transitive "destroy", which occurs 184 times in the Hebrew Bible. The Septuagint, an early Greek translation of the Hebrew Bible, renders "Abaddon" as "ἀπώλεια" ("apóleia"), while the Greek "Apollýon" is the active participle of ἀπόλλυμι "apóllymi", "to destroy". Judaism. Hebrew Bible. The term "abaddon" appears six times in the Masoretic text of the Hebrew Bible; "abaddon" means destruction or "place of destruction", or the realm of the dead, and is accompanied by Sheol. Second Temple era texts. The text of the Thanksgiving Hymns—which was found in the Dead Sea Scrolls—tells of "the Sheol of Abaddon" and of the "torrents of Belial [that] burst into Abaddon". The "Biblical Antiquities" (misattributed to Philo) mentions Abaddon as a place (destruction) rather than an individual. Abaddon is also one of the compartments of Gehenna. By extension, it can mean an underworld abode of lost souls, or Gehenna. Rabbinical literature. In some legends, Abaddon is identified as a realm where the damned lie in fire and snow, one of the places in Gehenna that Moses visited. Christianity. The New Testament contains the first known depiction of "Abaddon" as an individual entity instead of a place. In Revelation 9:11, Abaddon is described as "Destroyer", the angel of the Abyss, and as the king of a plague of locusts resembling horses with crowned human faces, women's hair, lions' teeth, wings, iron breast-plates, and a tail with a scorpion's stinger that torments for five months anyone who does not have the seal of God on their foreheads. The symbolism of Revelation 9:11 leaves the identity of Abaddon open to interpretation. Protestant commentator Matthew Henry (1708) believed Abaddon to be the Antichrist, whereas the Jamieson-Fausset-Brown Bible Commentary (1871) and Henry Hampton Halley (1922) identified the angel as Satan. Early in John Bunyan's The Pilgrim's Progress, the Christian pilgrim fights for "over half a day" with the demon Apollyon. This book permeated Christianity in the English-speaking world for 300 years after its first publication in 1678. In contrast, the Methodist publication "The Interpreter's Bible" states, "Abaddon, however, is an angel not of Satan but of God, performing his work of destruction at God's bidding", citing the context at Revelation chapter 20, verses 1 through 3. Jehovah's Witnesses also cite Revelation 20:1-3, where the angel having "the key of the abyss" is actually shown to be a representative of God, concluding that "Abaddon" is another name for Jesus after his resurrection. Mandaeism.
Mandaean scriptures such as the "Ginza Rabba" mention the Abaddons () as part of the World of Darkness. The "Right Ginza" mentions the existence of the "upper Abaddons" () as well as the "lower Abaddons" (). The final poem of the "Left Ginza" mentions the "House of the Abaddons" (). Apocryphal texts. In the gnostic 3rd century Acts of Thomas, Abaddon is the name of a demon, or the devil himself. Abaddon is given particularly important roles in two sources, a homily entitled "The Enthronement of Abaddon" by pseudo-Timothy of Alexandria, and the "Book of the Resurrection of Jesus Christ, by Bartholomew the Apostle". In the homily by Timothy, Abaddon was first named "Muriel", and had been given the task by God of collecting the earth that would be used in the creation of Adam. Upon completion of this task, the angel was appointed as a guardian. Everyone, including the angels, demons, and corporeal entities feared him. Abaddon was promised that any who venerated him in life could be saved. Abaddon is also said to have a prominent role in the Last Judgment, as the one who will take the souls to the Valley of Josaphat. He is described in the "Book of the Resurrection of Jesus Christ" as being present in the Tomb of Jesus at the moment of the resurrection of Jesus.
2475
Abadeh
Abadeh (, also Romanized as Ābādeh) is a city and the capital of Abadeh County, in Fars Province, Iran. Abadeh is situated at an elevation of in a fertile plain on the high road between Isfahan and Shiraz, from the former and from the latter. At the 2006 census, its population was 52,042, in 14,184 families. As of 2009, the population was estimated to be 59,042. It is the largest city in the Northern Fars Region (South Central-Iran), which is famed for its carved wood-work, made of the wood of pear and box trees. Sesame oil, castor oil, grain, and various fruits are also produced there. The area is famous for its Abadeh rugs. By road, Abadeh is closer to four other provincial capitals, Isfahan (193 km), Yasuj (197 km), Yazd (217 km), and Shahrekord (237 km), than to Shiraz (260 km), the capital of its own province. History. According to archaeologists, settlement in the area of present-day Abadeh dates back to the first millennium BC. Nomadic Kurdish groups were the first to settle in the plain between Abadeh and Isfahan in the Achaemenid period. Surviving ancient monuments, such as the ancient castle of Izadkhas and the Bahram Gur Palace in Surmaq, attest to the existence of culture and civilization in this geographical area. Abadeh has a special position due to its location at the three-way junction of routes between Isfahan, Yazd and Shiraz. Landmarks and crafts. Abadeh's historical monuments include the Kolah Farangi emirate, the Tymcheh Sarafyan and the Khaje tomb, located in the Khoja mountains. Abadeh's crafts include cotton embroidery. The town also produces Abadeh rugs. The rugs tend to be based on a cotton warp and have a thin, tightly knotted pile. Most Abadeh rugs are closely cut, making them very flat. Although some of the older Abadehs vary in style, many of the new designs are easily recognisable. These new designs, known as Heybatlu, consist of a single diamond-shaped medallion in the centre with smaller medallions on each corner. The pattern is typically geometrical flowers or animals, and the main colours are light reds or burnt orange on top of a dark blue background with strong green details. The corners or borders are generally ivory in colour. Although some Abadeh and Shiraz rugs appear similar, Abadeh rugs can normally be differentiated by their higher knot counts as well as the fact that the warp is invariably cotton. The rugs are almost always medium in size, and the knots per square inch (KPSI) of an average Abadeh is around 90. As always in the rug world, you get what you pay for; in general, however, Abadeh rugs are well made and fairly popular items, particularly in modern interiors or those with a Mediterranean or North African style. Geographical location. Abadeh is located at the northernmost point of Fars province. Abadeh is bordered by Isfahan province to the north and west, Safashahr and Eqlid to the south, and Yazd province to the east. The city is located north of Shiraz, south of Tehran, south of Isfahan, and southwest of Yazd. The geographical area of Abadeh is , which is about 11% of the total area of the province. Transportation. Expressway 65 passes through Abadeh. This situation helps Abadeh to improve its capabilities compared to the neighboring city of Eqlid. Road 78 connects Abadeh to Abarkuh, Yazd, Eqlid and Yasuj. It has a junction with the Abadeh-Shiraz Expressway 24 km south of the city. Road 55 runs from the Abadeh Ring Road to Soqad and Semirom.
The railroad from Isfahan to Shiraz passes Abadeh, and there are train services at Abadeh Railway Station to Shiraz, Isfahan, Tehran and Mashhad. Abadeh Airport (OISA) was planned to be built in the mid-1990s. Sport. Abadeh's main sport is football, as in the rest of the country. The main stadium is Takhti Stadium, located in Mo'allem Square. The main team in Abadeh is Behineh Rahbar Abadeh F.C., which currently plays in Iran Football's 3rd Division after finishing first in the Fars Provincial League (FPL) last year. It played in the 2010–11 Hazfi Cup, reaching the fourth round. Air defense base. In 2012, Iran announced it had started the construction of an air defense site in the city of Abadeh. The site is planned to be the largest in the country and will house 6,000 personnel for a variety of duties, including educational ones. Geography. Climate. Abadeh features a continental semi-arid climate (Köppen climate classification "BSk") with hot, dry summers and cold (at times extremely cold), wet winters, with large variations between daytime and nighttime temperatures throughout the year. The area can experience severely cold weather due to its high elevation. Handicrafts. Abadeh woodwork is internationally renowned, and examples are kept in museums around the world as outstanding works of art. The carvings of the Marble Palace were made by artists of this city, such as Master Ahmad Emami. In 2017, the World Council of Handicrafts (WCC) designated Abadeh as a world city of carving. In woodcarving ("monbat"), Abadeh has 150 active domestic or commercial workshops and 5,000 carving artists. Mines. Mines located around the city include the large Esteghlal Abadeh refractory clay mine, one of the largest producers of this mineral. In addition to the Esteghlal refractory mine, there is also an industrial mine near the city that supplies raw materials to tile, ceramic and brick factories across the country.
2476
Abae
Abae (, "") was an ancient town in the northeastern corner of ancient Phocis, in Greece, near the frontiers of the Opuntian Locrians, said to have been built by the Argive Abas, son of Lynceus and Hypermnestra, and grandson of Danaus. This bit of legend suggests an origin or at least an existence in the Bronze Age. Its protohistory supports a continued existence in Iron-Age antiquity. It was famous for its oracle of Apollo Abaeus, one of those consulted by Croesus, king of Lydia, and Mardonius, among others. The site of the oracle was rediscovered at Kalapodi and excavated in modern times. The results confirm an archaeological existence dating from the Bronze Age, as is suggested by the lore. History. Before the Persian invasion, the temple was richly adorned with treasuries and votive offerings. It was twice destroyed by fire; the first time by the Persians in the invasion of Xerxes in their march through Phocis (480 BCE), and a second time by the Boeotians in the Sacred or Phocian War in 346 BCE. It was rebuilt by Hadrian. Hadrian caused a smaller temple to be built near the ruins of the former one. In the new temple there were three ancient statues in brass of Apollo, Leto, and Artemis, which had been dedicated by the Abaei, and had perhaps been saved from the former temple. The ancient agora and the ancient theatre still existed in the town in the time of Pausanias. According to the statement of Aristotle, as preserved by Strabo, Thracians from the Phocian town of Abae immigrated to Euboea, and gave to the inhabitants the name of Abantes. Oracle. Despite destruction of the town, the oracle was still consulted, e.g. by the Thebans before the Battle of Leuctra in 371 BCE. The temple, along with the village of the same name, may have escaped destruction during the Third Sacred War (355–346 BCE), due to the respect given to the inhabitants; however, it was in a very dilapidated state when seen by Pausanias in the 2nd century CE, though some restoration, as well as the building of a new temple, was undertaken by Emperor Hadrian. The sanctity of the shrine ensured certain privileges to the people of Abae, and these were confirmed by the Romans. The Persians did not reflect this opinion and would destroy all the temples that they overcame, Abae included. The Greeks pledged not to rebuild them as a memorial of the ravages of the Persians. Among the most exciting recent archaeological discoveries in Greece is the recognition that the sanctuary site near the modern village of Kalapodi is not only the site of the oracle of Apollon at Abae, but that it was in constant use for cult practices from early Mycenaean times to the Roman period. It is thus the first site where the archaeology confirms the continuity of Mycenaean and Classical Greek religion, which has been inferred from the presence of the names of Classical Greek divinities on Linear B texts from Pylos and Knossos. The fortified site described below, originally identified as Abae by Colonel William Leake in the 19th century, is much more likely to be that of the Sanctuary of Artemis at Hyampolis: "The polygonal walls of the acropolis may still be seen in a fair state of preservation on a circular hill standing about 500 ft. [150 m] above the little plain of Exarcho; one gateway remains, and there are also traces of town walls below. The temple site was on a low spur of the hill, below the town. 
An early terrace wall supports a precinct in which are a stoa and some remains of temples; these were excavated by the British School at Athens in 1894, but very little was found." The oracle was mentioned in Oedipus Rex.
2477
Abakan
Abakan (; Khakas: , "Ağban", or , "Abaxan") is the capital city of Khakassia, Russia, located in the central part of the Minusinsk Depression, at the confluence of the Yenisei and Abakan Rivers. As of the 2010 Census, it had a population of 165,214—a slight increase over 165,197 recorded during the 2002 Census and a further increase from 154,092 recorded during the 1989 Census. History. Abakansky "ostrog" (), also known as Abakansk (), was built at the mouth of the Abakan River in 1675. In the 1780s, the "selo" of Ust-Abakanskoye () was established in this area. It was granted town status and given its current name on 30 April 1931. Chinese exiles. In 1940, Russian construction workers found ancient ruins during the construction of a highway between Abakan and Askiz. When the site was excavated by Soviet archaeologists in 1941–1945, they realized that they had discovered a building absolutely unique for the area: a large (1500 square meters) Chinese-style, likely Han dynasty era (206 BC–220 AD) palace. The identity of the high-ranking personage who lived luxuriously in Chinese style, far outside the Han Empire's borders, has remained a matter for discussion ever since. A Russian archaeologist surmised, based on circumstantial evidence, that the palace may have been the residence of Li Ling, a Chinese general who had been defeated by the Xiongnu in 99 BCE and defected to them as a result. While this opinion has remained popular, other views have been expressed as well. More recently, for example, it has been claimed to be the residence of Lu Fang (), a Han throne pretender from the Guangwu era. Lithuanian exiles. In the late 18th century and during the 19th century, Lithuanian participants in the 1794, 1830–1831, and 1863 rebellions against Russian rule were exiled to Abakan. A group of camps was established where prisoners were forced to work in the coal mines. After Stalin's death, Lithuanian exiles from the nearby settlements moved in. Administrative and municipal status. Abakan is the capital of the republic. Within the framework of administrative divisions, it is incorporated as the City of Abakan—an administrative unit with a status equal to that of the districts. As a municipal division, the City of Abakan is incorporated as Abakan Urban Okrug. Economy. The city has industrial enterprises, the Katanov State University of Khakassia, and three theatres. Furthermore, it has a commercial center that produces footwear, foodstuffs, and metal products. Transportation. Abakan (together with Tayshet) was a terminal of the major Abakan-Taishet Railway. Now it is an important railway junction. The city is served by the Abakan International Airport. Military. The 100th Air Assault Brigade of the Russian Airborne Troops was based in the city until circa 1996. Sites. Abakan's sites of interest include: Sports. Bandy, similar to hockey, is one of the most popular sports in the city. Sayany-Khakassia played in the top-tier Super League in the 2012–13 season but was relegated for the 2013–14 season and has played in the Russian Bandy Supreme League ever since. The Russian Government Cup was played here in 1988 and in 2012. Geography. Climate. Abakan has a borderline humid continental (Köppen climate classification "Dwb")/cold semi-arid climate (Köppen "BSk"). Temperature differences between seasons are extreme, which is typical for Siberia. Precipitation is concentrated in the summer and is relatively low because of rain shadows from nearby mountains. Local government.
The structure of the local government in the city of Abakan is as follows: The council consists of 28 deputies. Deputies are elected in single-member constituencies and on party lists. Elections of deputies of the VI convocation were held on a single voting day in 2018. In 2021, the annual Nikolai Bulakin Prize of Abakan was established for outstanding services and achievements in the city's development. The award includes a monetary reward of 200,000 rubles and a diploma.
2482
Arc de Triomphe
The Arc de Triomphe de l'Étoile (, , ; ) is one of the most famous monuments in Paris, France, standing at the western end of the Champs-Élysées at the centre of Place Charles de Gaulle, formerly named Place de l'Étoile—the "étoile" or "star" of the juncture formed by its twelve radiating avenues. The location of the arc and the plaza is shared between three arrondissements, 16th (south and west), 17th (north), and 8th (east). The Arc de Triomphe honours those who fought and died for France in the French Revolutionary and Napoleonic Wars, with the names of all French victories and generals inscribed on its inner and outer surfaces. Beneath its vault lies the Tomb of the Unknown Soldier from World War I. The central cohesive element of the "Axe historique" (historic axis, a sequence of monuments and grand thoroughfares on a route running from the courtyard of the Louvre to the Grande Arche de la Défense), the Arc de Triomphe was designed by Jean Chalgrin in 1806; its iconographic programme pits heroically nude French youths against bearded Germanic warriors in chain mail. It set the tone for public monuments with triumphant patriotic messages. Inspired by the Arch of Titus in Rome, Italy, the Arc de Triomphe has an overall height of , width of and depth of , while its large vault is high and wide. The smaller transverse vaults are high and wide. Three weeks after the Paris victory parade in 1919 (marking the end of hostilities in World War I), Charles Godefroy flew his Nieuport biplane under the arch's primary vault, with the event captured on newsreel. Paris's Arc de Triomphe was the tallest triumphal arch until the completion of the Monumento a la Revolución in Mexico City in 1938, which is high. The Arch of Triumph in Pyongyang, completed in 1982, is modeled on the Arc de Triomphe and is slightly taller at . The Grande Arche in La Défense near Paris is 110 metres high. Although it is not named an Arc de Triomphe, it has been designed on the same model and from the perspective of the Arc de Triomphe. It qualifies as the world's tallest arch. History. Construction and late 19th century. The Arc de Triomphe is located on the right bank of the Seine at the centre of a dodecagonal configuration of twelve radiating avenues. It was commissioned in 1806, after the victory at Austerlitz by Emperor Napoleon at the peak of his fortunes. Laying the foundations alone took two years and, in 1810, when Napoleon entered Paris from the west with his new bride, Archduchess Marie-Louise of Austria, he had a wooden mock-up of the completed arch constructed. The architect, Jean Chalgrin, died in 1811 and the work was taken over by Jean-Nicolas Huyot. During the Bourbon Restoration, construction was halted, and it would not be completed until the reign of King Louis-Philippe, between 1833 and 1836, by the architects Goust, then Huyot, under the direction of Héricart de Thury. The final cost was reported at about 10,000,000 francs (equivalent to an estimated €65 million or $75 million in 2020). On 15 December 1840, brought back to France from Saint Helena, Napoleon's remains passed under it on their way to the Emperor's final resting place at Les Invalides. Before burial in the Panthéon, the body of Victor Hugo was displayed under the Arc on the night of 22 May 1885. 20th century. The sword carried by the "Republic" in the "Marseillaise" relief broke off on the day, it is said, that the Battle of Verdun began in 1916. 
The relief was immediately hidden by tarpaulins to conceal the accident and avoid any undesired ominous interpretations. On 7 August 1919, Charles Godefroy successfully flew his biplane under the Arc. Jean Navarre was the pilot who was tasked to make the flight, but he died on 10 July 1919 when he crashed near Villacoublay while training for the flight. Following its construction, the Arc de Triomphe became the rallying point of French troops parading after successful military campaigns and for the annual Bastille Day military parade. Famous victory marches around or under the Arc have included the Germans in 1871, the French in 1919, the Germans in 1940, and the French and Allies in 1944 and 1945. A United States postage stamp of 1945 shows the "Arc de Triomphe" in the background as victorious American troops march down the Champs-Élysées and U.S. airplanes fly overhead on 29 August 1944. After the interment of the Unknown Soldier, however, all military parades (including the aforementioned post-1919) have avoided marching through the actual arch. The route taken is up to the arch and then around its side, out of respect for the tomb and its symbolism. Both Hitler in 1940 and de Gaulle in 1944 observed this custom. By the early 1960s, the monument had grown very blackened from coal soot and automobile exhaust, and during 1965–1966 it was cleaned through bleaching. In the prolongation of the Avenue des Champs-Élysées, a new arch, the Grande Arche de la Défense, was built in 1982, completing the line of monuments that forms Paris's "Axe historique". After the "Arc de Triomphe du Carrousel" and the "Arc de Triomphe de l'Étoile", the "Grande Arche" is the third arch built on the same perspective. In 1995, the Armed Islamic Group of Algeria placed a bomb near the Arc de Triomphe which wounded 17 people as part of a campaign of bombings. 21st century. In late 2018, the Arc de Triomphe suffered acts of vandalism as part of the Yellow vests protests. The vandals sprayed the monument with graffiti and ransacked its small museum. L'Arc de Triomphe, Wrapped. In September 2021, the arc was wrapped in a silvery blue fabric and red rope, a posthumous project planned by artists Christo and Jeanne-Claude since the early 1960s. Design. Monument. The astylar design is by Jean Chalgrin (1739–1811), in the Neoclassical version of ancient Roman architecture. Major academic sculptors of France are represented in the sculpture of the "Arc de Triomphe": Jean-Pierre Cortot; François Rude; Antoine Étex; James Pradier and Philippe Joseph Henri Lemaire. The main sculptures are not integral friezes but are treated as independent trophies applied to the vast ashlar masonry masses, not unlike the gilt-bronze appliqués on Empire furniture. The four sculptural groups at the base of the Arc are "The Triumph of 1810" (Cortot), "Resistance" and "Peace" (both by Antoine Étex), and the most renowned of them all, "Departure of the Volunteers of 1792" commonly called "La Marseillaise" (François Rude). The face of the allegorical representation of France calling forth her people on this last was used as the belt buckle for the honorary rank of Marshal of France. Since the fall of Napoleon (1815), the sculpture representing "Peace" is interpreted as commemorating the Peace of 1815. In the attic above the richly sculptured frieze of soldiers are 30 shields engraved with the names of major French victories in the French Revolution and Napoleonic wars. 
The inside walls of the monument list the names of 660 people, among which are 558 French generals of the First French Empire; The names of those generals killed in battle are underlined. Also inscribed, on the shorter sides of the four supporting columns, are the names of the major French victories in the Napoleonic Wars. The battles that took place in the period between the departure of Napoleon from Elba to his final defeat at Waterloo are not included. For four years from 1882 to 1886, a monumental sculpture by Alexandre Falguière topped the arch. Titled "Le triomphe de la Révolution" ("The Triumph of the Revolution"), it depicted a chariot drawn by horses preparing "to crush Anarchy and Despotism". Inside the monument, a permanent exhibition, conceived by artist Maurice Benayoun and architect Christophe Girault, opened in February 2007. Tomb of the Unknown Soldier. Beneath the Arc is the Tomb of the Unknown Soldier from World War I. Interred on Armistice Day 1920, an eternal flame burns in memory of the dead who were never identified (now in both world wars). A ceremony is held at the Tomb of the Unknown Soldier every 11 November on the anniversary of the Armistice of 11 November 1918 signed by the Entente Powers and Germany in 1918. It was originally decided on 12 November 1919 to bury the unknown soldier's remains in the Panthéon, but a public letter-writing campaign led to the decision to bury him beneath the Arc de Triomphe. The coffin was put in the chapel on the first floor of the Arc on 10 November 1920, and put in its final resting place on 28 January 1921. The slab on top bears the inscription: "Ici repose un soldat français mort pour la Patrie, 1914–1918" ("Here rests a French soldier who died for the Fatherland, 1914–1918"). In 1961, U.S. President John F. Kennedy and First Lady Jacqueline Kennedy paid their respects at the Tomb of the Unknown Soldier, accompanied by President Charles de Gaulle. After the 1963 assassination of President Kennedy, Mrs. Kennedy remembered the eternal flame at the Arc de Triomphe and requested that an eternal flame be placed next to her husband's grave at Arlington National Cemetery in Virginia. Access. The "Arc de Triomphe" is accessible by the RER and Métro, with exit at the Charles de Gaulle–Étoile station. Because of heavy traffic on the roundabout of which the Arc is the centre, it is recommended that pedestrians use one of two underpasses located at the "Champs Élysées" and the "Avenue de la Grande Armée". A lift will take visitors almost to the top – to the attic, where a small museum contains large models of the Arc and tells its story from the time of its construction. Another 40 steps remain to climb to reach the top, the "terrasse", from where one can enjoy a panoramic view of Paris. The location of the arc, as well as the Place de l'Étoile, is shared between three arrondissements, 16th (south and west), 17th (north), and 8th (east). Replicas. While many structures around the world resemble the "Arc de Triomphe", some were actually inspired by it. Replicas that used its design as a model include Arch of Triumph in Pyongyang, North Korea; Arcul de Triumf in Bucharest, Romania; Rosedale World War I Memorial Arch in Kansas City, Kansas, US; and a miniature version at the Paris Casino in Las Vegas, US.
2484
ATM
ATM or atm often refers to: ATM or atm may also refer to:
2487
Amazonite
Amazonite, also known as Amazonstone, is a green tectosilicate mineral, a variety of the potassium feldspar called microcline. Its chemical formula is KAlSi3O8, which is polymorphic with orthoclase. Its name is taken from that of the Amazon River, from which green stones were formerly obtained, though it is unknown whether those stones were amazonite. Although it has been used for jewellery for well over three thousand years, as attested by archaeological finds in Middle and New Kingdom Egypt and Mesopotamia, no ancient or medieval authority mentions it. It was first described as a distinct mineral only in the 18th century. Green and greenish-blue varieties of potassium feldspars that are predominantly triclinic are designated as amazonite. It has been described as a "beautiful crystallized variety of a bright verdigris-green" and as possessing a "lively green colour." It is occasionally cut and used as a gemstone. Occurrence. Amazonite is a mineral of limited occurrence. In Bronze Age Egypt, it was mined in the southern Eastern Desert at Gebel Migif. In early modern times, it was obtained almost exclusively from the area of Miass in the Ilmensky Mountains, southwest of Chelyabinsk, Russia, where it occurs in granitic rocks. Amazonite is now known to occur in various places around the globe, including Australia, China, Libya, Mongolia, South Africa, Sweden, and the United States. Color. For many years, the source of amazonite's color was a mystery. Some people assumed the color was due to copper because copper compounds often have blue and green colors. A 1985 study suggests that the blue-green color results from quantities of lead and water in the feldspar. Subsequent 1998 theoretical studies by A. Julg expand on the potential role of aliovalent lead in the color of microcline. Other studies suggest the colors are associated with the increasing content of lead, rubidium, and thallium ranging in amounts between 0.00X and 0.0X in the feldspars, with even extremely high contents of PbO (lead monoxide) of 1% or more known from the literature. A 2010 study also implicated the role of divalent iron in the green coloration. These studies and associated hypotheses indicate the complex nature of the color in amazonite; in other words, the color may be the aggregate effect of several mutually inclusive and necessary factors. Health. A 2021 study by the German Institut für Edelsteinprüfung (EPI) found that the amount of lead that leaked from a sample of amazonite into an acidic solution simulating saliva exceeded the amount recommended by European Union standard DIN EN 71-3:2013 by a factor of five. The experiment was designed to simulate a child swallowing amazonite, and could also apply to newer wellness practices such as placing the mineral in oils or drinking water for days.
2490
Ambrosius Bosschaert
Ambrosius Bosschaert the Elder (18 January 1573 – 1621) was a Flemish-born Dutch still life painter and art dealer. He is recognised as one of the earliest painters who created floral still lifes as an independent genre. He founded a dynasty of painters who continued his style of floral and fruit painting and turned Middelburg into the leading centre for flower painting in the Dutch Republic. Biography. He was born in Antwerp, where he started his career, but he spent most of it in Middelburg (1587–1613), where he moved with his family because of the threat of religious persecution. He specialized in painting still lifes with flowers, which he signed with the monogram AB (the B in the A). At the age of twenty-one, he joined the city's Guild of Saint Luke and later became dean. Not long after, Bosschaert married and established himself as a leading figure in the fashionable floral painting genre. He had three sons who all became flower painters: Ambrosius II, Johannes and Abraham. His brother-in-law Balthasar van der Ast also lived and worked in his workshop and accompanied him on his travels. Bosschaert later worked in Amsterdam (1614), Bergen op Zoom (1615–1616), Utrecht (1616–1619), and Breda (1619). In 1619, when he moved to Utrecht, his brother-in-law van der Ast entered the Utrecht Guild of St. Luke, where the renowned painter Abraham Bloemaert had just become dean. The painter Roelandt Savery (1576–1639) entered the St. Luke's guild in Utrecht at about the same time. Savery had considerable influence on the Bosschaert dynasty. After Bosschaert died in The Hague while on commission there for a flower piece, Balthasar van der Ast took over his workshop and pupils in Middelburg. Style. His bouquets were painted symmetrically and with scientific accuracy in small dimensions and normally on copper. They sometimes included symbolic and religious meanings. At the time of his death, Bosschaert was working on an important commission in The Hague. That piece is now in the collection in Stockholm. Bosschaert was one of the first artists to specialize in flower still life painting as a stand-alone subject. He started a tradition of painting detailed flower bouquets, which typically included tulips and roses, and inspired the genre of Dutch flower painting. Thanks to the booming seventeenth-century Dutch art market, he became highly successful, as the inscription on one of his paintings attests. His works commanded high prices, although he never achieved the level of prestige of Jan Brueghel the Elder, the Antwerp master who contributed to the floral genre. Legacy. His sons and his pupil and brother-in-law, Balthasar van der Ast, were among those to uphold the Bosschaert dynasty, which continued until the mid-17th century. It may be no coincidence that this trend coincided with a national obsession with exotic flowers, which made flower portraits highly sought after. Although he was highly in demand, he did not create many pieces because he was also employed as an art dealer.
2493
Anthroposophy
Anthroposophy is a spiritual movement founded in the early 20th century by the esotericist Rudolf Steiner; it postulates the existence of an objective, intellectually comprehensible spiritual world, accessible to human experience. Followers of anthroposophy aim to engage in spiritual discovery through a mode of thought independent of sensory experience. While much of anthroposophy is pseudoscientific, proponents claim to present their ideas in a manner that is verifiable by rational discourse and say that they seek precision and clarity comparable to that obtained by scientists investigating the physical world. Anthroposophy has its roots in German idealism, mystical philosophies, and pseudoscience including racist pseudoscience. Steiner chose the term "anthroposophy" (from Greek , 'human', and "sophia", 'wisdom') to emphasize his philosophy's humanistic orientation. He defined it as "a scientific exploration of the spiritual world". Others have variously called it a "philosophy and cultural movement", a "spiritual movement", a "spiritual science", or "a system of thought". Anthroposophical ideas have been employed in alternative movements in many areas including education (both in Waldorf schools and in the Camphill movement), agriculture, medicine, banking, organizational development, and the arts. The main organization for advocacy of Steiner's ideas, the Anthroposophical Society, is headquartered at the Goetheanum in Dornach, Switzerland. Anthroposophy's supporters include writers Saul Bellow and Selma Lagerlöf, painters Piet Mondrian, Wassily Kandinsky and Hilma af Klint, filmmaker Andrei Tarkovsky, child psychiatrist Eva Frommer, music therapist Maria Schüppel, Romuva religious founder Vydūnas, and former president of Georgia Zviad Gamsakhurdia. Though several prominent members of the Nazi Party were supporters of anthroposophy and its movements, including an agriculturalist, SS colonel Hermann Schneider, and Gestapo chief Heinrich Müller, anti-Nazis such as Traute Lafrenz, a member of the White Rose resistance movement, were also followers. Rudolf Hess, the Deputy Führer, was a patron of Waldorf schools and a staunch defender of biodynamic agriculture. The historian of religion Olav Hammer has termed anthroposophy "the most important esoteric society in European history". Many scientists, physicians, and philosophers, including Michael Shermer, Michael Ruse, Edzard Ernst, David Gorski, and Simon Singh, have criticized anthroposophy's applications in the areas of medicine, biology, agriculture, and education as dangerous and pseudoscientific. Some of Steiner's ideas are unsupported or disproven by modern science, including racial evolution, clairvoyance (Steiner claimed he was clairvoyant), and the Atlantis myth. History. The early work of the founder of anthroposophy, Rudolf Steiner, culminated in his "Philosophy of Freedom" (also translated as "The Philosophy of Spiritual Activity" and "Intuitive Thinking as a Spiritual Path"). Here, Steiner developed a concept of free will based on inner experiences, especially those that occur in the creative activity of independent thought. By the beginning of the twentieth century, Steiner's interests turned almost exclusively to spirituality. His work began to draw the attention of others interested in spiritual ideas; among these was the Theosophical Society. 
From 1900 on, thanks to the positive reception his ideas received from Theosophists, Steiner focused increasingly on his work with the Theosophical Society, becoming the secretary of its section in Germany in 1902. During his leadership, membership increased dramatically, from just a few individuals to sixty-nine lodges. By 1907, a split between Steiner and the Theosophical Society became apparent. While the Society was oriented toward an Eastern and especially Indian approach, Steiner was trying to develop a path that embraced Christianity and natural science. The split became irrevocable when Annie Besant, then president of the Theosophical Society, presented the child Jiddu Krishnamurti as the reincarnated Christ. Steiner strongly objected and considered any comparison between Krishnamurti and Christ to be nonsense; many years later, Krishnamurti also repudiated the assertion. Steiner's continuing differences with Besant led him to separate from the Theosophical Society Adyar. He was subsequently followed by the great majority of the Theosophical Society's German members, as well as many members of other national sections. By this time, Steiner had reached considerable stature as a spiritual teacher and expert in the occult. He spoke about what he considered to be his direct experience of the Akashic Records (sometimes called the "Akasha Chronicle"), thought to be a spiritual chronicle of the history, pre-history, and future of the world and mankind. In a number of works, Steiner described a path of inner development he felt would let anyone attain comparable spiritual experiences. In Steiner's view, sound vision could be developed, in part, by practicing rigorous forms of ethical and cognitive self-discipline, concentration, and meditation. In particular, Steiner believed a person's spiritual development could occur only after a period of moral development. In 1912, Steiner broke away from the "Theosophical Society" to found an independent group, which he named the "Anthroposophical Society." After World War I, members of the young society began applying Steiner's ideas to create cultural movements in areas such as traditional and special education, farming, and medicine. By 1923, a schism had formed between older members, focused on inner development, and younger members eager to become active in contemporary social transformations. In response, Steiner attempted to bridge the gap by establishing an overall School for "Spiritual Science". As a spiritual basis for the reborn movement, Steiner wrote a "Foundation Stone Meditation" which remains a central touchstone of anthroposophical ideas. Steiner died just over a year later, in 1925. The Second World War temporarily hindered the anthroposophical movement in most of Continental Europe, as the Anthroposophical Society and most of its practical counter-cultural applications were banned by the Nazi government. Though at least one prominent member of the Nazi Party, Rudolf Hess, was a strong supporter of anthroposophy, very few anthroposophists belonged to the National Socialist Party. In reality, Steiner had both enemies and loyal supporters in the upper echelons of the Nazi regime. Staudenmaier speaks of the "polycratic party-state apparatus", so Nazism's approach to Anthroposophy was not characterized by monolithic ideological unity. When Hess flew to the UK, the anthroposophists' most powerful protector was gone, but they were still not left without supporters among higher-placed Nazis. 
The Third Reich banned almost all esoteric organizations, claiming that they were controlled by Jews. In truth, while Anthroposophists complained of bad press, they were to a surprising extent left alone by the Nazi regime, their press coverage "including outspokenly supportive pieces in the "Völkischer Beobachter"". Ideological purists from the Sicherheitsdienst argued largely in vain against Anthroposophy. According to Staudenmaier, "The prospect of unmitigated persecution was held at bay for years in a tenuous truce between pro-anthroposophical and anti-anthroposophical Nazi factions." According to Hans Büchenbacher, an anthroposophist, the Secretary General of the General Anthroposophical Society, Guenther Wachsmuth, as well as Steiner's widow, Marie Steiner, were "completely pro-Nazi." Marie Steiner-von Sivers, Guenther Wachsmuth, and Albert Steffen had publicly expressed sympathy for the Nazi regime since its beginnings; guided by such sympathies in their leadership, the Swiss and German Anthroposophical organizations chose a path that combined accommodation with collaboration, which in the end ensured that while the Nazi regime hunted esoteric organizations, Gentile Anthroposophists in Nazi Germany and the countries it occupied were left alone to a surprising extent. They suffered some setbacks from the enemies of Anthroposophy in the upper echelons of the Nazi regime, but they also had loyal supporters there, so overall Gentile Anthroposophists were not badly hit by the Nazi regime. By 2007, national branches of the Anthroposophical Society had been established in fifty countries, and about 10,000 institutions around the world were working on the basis of anthroposophical ideas. Etymology and earlier uses of the word. "Anthroposophy" is an amalgam of the Greek terms ( 'human') and ( 'wisdom'). An early English usage is recorded by Nathan Bailey (1742) as meaning "the knowledge of the nature of man." The first known use of the term "anthroposophy" occurs within "Arbatel de magia veterum, summum sapientiae studium", a book published anonymously in 1575 and attributed to Heinrich Cornelius Agrippa. The work describes anthroposophy (as well as theosophy) variously as an understanding of goodness, nature, or human affairs. In 1648, the Welsh philosopher Thomas Vaughan published his "Anthroposophia Theomagica, or a discourse of the nature of man and his state after death." The term began to appear with some frequency in philosophical works of the mid- and late-nineteenth century. In the early part of that century, Ignaz Troxler used the term "anthroposophy" to refer to philosophy deepened to self-knowledge, which he suggested allows deeper knowledge of nature as well. He spoke of human nature as a mystical unity of God and world. Immanuel Hermann Fichte used the term "anthroposophy" to refer to "rigorous human self-knowledge," achievable through thorough comprehension of the human spirit and of the working of God in this spirit, in his 1856 work "Anthropology: The Study of the Human Soul". In 1872, the philosopher of religion Gideon Spicker used the term "anthroposophy" to refer to self-knowledge that would unite God and world: "the true study of the human being is the human being, and philosophy's highest aim is self-knowledge, or Anthroposophy." 
In 1882, the philosopher Robert Zimmermann published the treatise, "An Outline of Anthroposophy: Proposal for a System of Idealism on a Realistic Basis," proposing that idealistic philosophy should employ logical thinking to extend empirical experience. Steiner attended lectures by Zimmermann at the University of Vienna in the early 1880s, thus at the time of this book's publication. In the early 1900s, Steiner began using the term "anthroposophy" (i.e. human wisdom) as an alternative to the term "theosophy" (i.e. divine wisdom). Central ideas. Spiritual knowledge and freedom. Anthroposophical proponents aim to extend the clarity of the scientific method to phenomena of human soul-life and spiritual experiences. Steiner believed this required developing new faculties of objective spiritual perception, which he maintained was still possible for contemporary humans. The steps of this process of inner development he identified as consciously achieved "imagination", "inspiration", and "intuition". Steiner believed results of this form of spiritual research should be expressed in a way that can be understood and evaluated on the same basis as the results of natural science. Steiner hoped to form a spiritual movement that would free the individual from any external authority. For Steiner, the human capacity for rational thought would allow individuals to comprehend spiritual research on their own and bypass the danger of dependency on an authority such as himself. Steiner contrasted the anthroposophical approach with both conventional mysticism, which he considered lacking the clarity necessary for exact knowledge, and natural science, which he considered arbitrarily limited to what can be seen, heard, or felt with the outward senses. Nature of the human being. In "Theosophy", Steiner suggested that human beings unite a "physical body" of substances gathered from and returning to the inorganic world; a "life body" (also called the etheric body), in common with all living creatures (including plants); a bearer of sentience or consciousness (also called the astral body), in common with all animals; and the ego, which anchors the faculty of self-awareness unique to human beings. Anthroposophy describes a broad evolution of human consciousness. Early stages of human evolution possess an intuitive perception of reality, including a clairvoyant perception of spiritual realities. Humanity has progressively evolved an increasing reliance on intellectual faculties and a corresponding loss of intuitive or clairvoyant experiences, which have become atavistic. The increasing intellectualization of consciousness, initially a progressive direction of evolution, has led to an excessive reliance on abstraction and a loss of contact with both natural and spiritual realities. However, to go further requires new capacities that combine the clarity of intellectual thought with the imagination and with consciously achieved inspiration and intuitive insights. Anthroposophy speaks of the reincarnation of the human spirit: that the human being passes between stages of existence, incarnating into an earthly body, living on earth, leaving the body behind, and entering into the spiritual worlds before returning to be born again into a new life on earth. After the death of the physical body, the human spirit recapitulates the past life, perceiving its events as they were experienced by the objects of its actions. A complex transformation takes place between the review of the past life and the preparation for the next life. 
The individual's karmic condition eventually leads to a choice of parents, physical body, disposition, and capacities that provide the challenges and opportunities that further development requires, which includes karmically chosen tasks for the future life. Steiner described some conditions that determine the interdependence of a person's lives, or karma. Evolution. The anthroposophical view of evolution considers all animals to have evolved from an early, unspecialized form. As the least specialized animal, human beings have maintained the closest connection to the archetypal form; contrary to the Darwinian conception of human evolution, all other animals "devolve" from this archetype. The spiritual archetype originally created by spiritual beings was devoid of physical substance; only later did this descend into material existence on Earth. In this view, human evolution has accompanied the Earth's evolution throughout the existence of the Earth. Anthroposophy adapted Theosophy's complex system of cycles of world development and human evolution. The evolution of the world is said to have occurred in cycles. The first phase of the world consisted only of heat. In the second phase, a more active condition, light, and a more condensed, gaseous state separate out from the heat. In the third phase, a fluid state arose, as well as a sounding, forming energy. In the fourth (current) phase, solid physical matter first exists. This process is said to have been accompanied by an evolution of consciousness which led up to present human culture. Ethics. The anthroposophical view is that good is found in the balance between two polar influences on world and human evolution. These are often described through their mythological embodiments as spiritual adversaries which endeavour to tempt and corrupt humanity, Lucifer and his counterpart Ahriman. These have both positive and negative aspects. Lucifer is the light spirit, which "plays on human pride and offers the delusion of divinity", but also motivates creativity and spirituality; Ahriman is the dark spirit that tempts human beings to "...deny [their] link with divinity and to live entirely on the material plane", but that also stimulates intellectuality and technology. Both figures exert a negative effect on humanity when their influence becomes misplaced or one-sided, yet their influences are necessary for human freedom to unfold. Each human being has the task to find a balance between these opposing influences, and each is helped in this task by the mediation of the "Representative of Humanity", also known as the Christ being, a spiritual entity who stands between and harmonizes the two extremes. Claimed applications. Steiner/Waldorf education. This is a pedagogical movement with over 1000 Steiner or Waldorf schools (the latter name stems from the first such school, founded in Stuttgart in 1919) located in some 60 countries; the great majority of these are independent (private) schools. Sixteen of the schools have been affiliated with the United Nations' UNESCO Associated Schools Project Network, which sponsors education projects that foster improved quality of education throughout the world. Waldorf schools receive full or partial governmental funding in some European nations, Australia and in parts of the United States (as Waldorf method public or charter schools) and Canada. 
The schools have been founded in a variety of communities: from the "favelas" of São Paulo to wealthy suburbs of major cities, and in India, Egypt, Australia, the Netherlands, Mexico and South Africa. Though most of the early Waldorf schools were teacher-founded, the schools today are usually initiated and later supported by a parent community. Waldorf schools are among the most visible anthroposophical institutions. Biodynamic agriculture. Biodynamic agriculture is a form of alternative agriculture based on pseudo-scientific and esoteric concepts. It is also the first intentional form of organic farming, having begun in 1924, when Rudolf Steiner gave a series of lectures published in English as "The Agriculture Course". Steiner is considered one of the founders of the modern organic farming movement. Anthroposophical medicine. Anthroposophical medicine is a form of alternative medicine based on pseudoscientific and occult notions rather than on science-based medicine. Most anthroposophic medical preparations are highly diluted, like homeopathic remedies; while harmless in and of themselves, using them in place of conventional medicine to treat illness is ineffective and risks adverse consequences. One of the most studied applications has been the use of mistletoe extracts in cancer therapy, but research has found no evidence of benefit. Special needs education and services. In 1922, Ita Wegman founded an anthroposophical center for special needs education, the Sonnenhof, in Switzerland. In 1940, Karl König founded the Camphill Movement in Scotland. The latter in particular has spread widely, and there are now over a hundred Camphill communities and other anthroposophical homes for children and adults in need of special care in about 22 countries around the world. Karl König, Thomas Weihs and others have written extensively on the ideas underlying this approach to special education. Architecture. Steiner designed around thirteen buildings in an organic-expressionist architectural style. Foremost among these are his designs for the two Goetheanum buildings in Dornach, Switzerland. Thousands of further buildings have been built by later generations of anthroposophic architects. Architects who have been strongly influenced by the anthroposophic style include Imre Makovecz in Hungary, Hans Scharoun and Joachim Eble in Germany, Erik Asmussen in Sweden, Kenji Imai in Japan, Thomas Rau, Anton Alberts and Max van Huut in the Netherlands, Christopher Day and Camphill Architects in the UK, Thompson and Rose in America, Denis Bowman in Canada, and Walter Burley Griffin and Gregory Burgess in Australia. ING House in Amsterdam is a contemporary building by an anthroposophical architect which has received awards for its ecological design and approach to a self-sustaining ecology as an autonomous building and example of sustainable architecture. Eurythmy. Together with Marie von Sivers, Steiner developed eurythmy, a performance art combining dance, speech, and music. Social finance and entrepreneurship. Around the world today there are a number of banks, companies, charities, and schools for developing co-operative forms of business using Steiner's ideas about economic associations, aiming at harmonious and socially responsible roles in the world economy. The first anthroposophic bank was the "Gemeinschaftsbank für Leihen und Schenken" in Bochum, Germany, founded in 1974. 
Socially responsible banks founded out of anthroposophy include Triodos Bank, founded in the Netherlands in 1980 and also active in the UK, Germany, Belgium, Spain and France. Other examples include Cultura Sparebank, which dates from 1982, when a group of Norwegian anthroposophists began an initiative for ethical banking, though it only began operating as a savings bank in Norway in the late 1990s; La Nef in France; and RSF Social Finance in San Francisco. Harvard Business School historian Geoffrey Jones traced the considerable impact both Steiner and later anthroposophical entrepreneurs had on the creation of many businesses in organic food, ecological architecture and sustainable finance. Organizational development, counselling and biography work. Bernard Lievegoed, a psychiatrist, founded a new method of individual and institutional development oriented towards humanizing organizations and linked with Steiner's ideas of the threefold social order. This work is represented by the NPI Institute for Organizational Development in the Netherlands and sister organizations in many other countries. Various forms of biographic and counselling work have been developed on the basis of anthroposophy. Speech and drama. There are also anthroposophical movements to renew speech and drama, the most important of which are based on the work of Marie Steiner-von Sivers ("speech formation", also known as "Creative Speech") and the "Chekhov Method" originated by Michael Chekhov (nephew of Anton Chekhov). Art. Anthroposophic painting, a style inspired by Rudolf Steiner, featured prominently in the first Goetheanum's cupola. The technique frequently begins by filling the surface to be painted with color, out of which forms are gradually developed, often images with symbolic-spiritual significance. Paints that allow for many transparent layers are preferred, and often these are derived from plant materials. Rudolf Steiner appointed the English sculptor Edith Maryon as head of the School of Fine Art at the Goetheanum. Together they carved the 9-metre tall sculpture titled "The Representative of Humanity", on display at the Goetheanum. Social goals. For a period after World War I, Steiner was extremely active and well known in Germany, in part because he lectured widely proposing social reforms. Steiner was a sharp critic of nationalism, which he saw as outdated, and a proponent of achieving social solidarity through individual freedom. A petition proposing a radical change in the German constitution and expressing his basic social ideas (signed by Hermann Hesse, among others) was widely circulated. His main book on social reform is "Toward Social Renewal". Anthroposophy continues to aim at reforming society through maintaining and strengthening the independence of the spheres of cultural life, human rights and the economy. It emphasizes a particular ideal in each of these three realms of society: Esoteric path. Paths of spiritual development. According to Steiner, a real spiritual world exists, evolving along with the material one. Steiner held that the spiritual world can be researched in the right circumstances through direct experience, by persons practicing rigorous forms of ethical and cognitive self-discipline. Steiner described many exercises he said were suited to strengthening such self-discipline; the most complete exposition of these is found in his book "How To Know Higher Worlds". The aim of these exercises is to develop higher levels of consciousness through meditation and observation. 
Details about the spiritual world, Steiner suggested, could on such a basis be discovered and reported, though no more infallibly than the results of natural science. Steiner regarded his research reports as being important aids to others seeking to enter into spiritual experience. He suggested that a combination of spiritual exercises (for example, concentrating on an object such as a seed), moral development (control of thought, feelings and will combined with openness, tolerance and flexibility) and familiarity with other spiritual researchers' results would best further an individual's spiritual development. He consistently emphasised that any inner, spiritual practice should be undertaken in such a way as not to interfere with one's responsibilities in outer life. Steiner distinguished between what he considered were true and false paths of spiritual investigation. In anthroposophy, artistic expression is also treated as a potentially valuable bridge between spiritual and material reality. Prerequisites to and stages of inner development. Steiner's stated prerequisites to beginning on a spiritual path include a willingness to take up serious cognitive studies, a respect for factual evidence, and a responsible attitude. Central to progress on the path itself is a harmonious cultivation of the following qualities: Steiner sees meditation as a concentration and enhancement of the power of thought. By focusing consciously on an idea, feeling or intention the meditant seeks to arrive at pure thinking, a state exemplified by but not confined to pure mathematics. In Steiner's view, conventional sensory-material knowledge is achieved through relating perception and concepts. The anthroposophic path of esoteric training articulates three further stages of supersensory knowledge, which do not necessarily follow strictly sequentially in any single individual's spiritual progress. Spiritual exercises. Steiner described numerous exercises he believed would bring spiritual development; other anthroposophists have added many others. A central principle is that "for every step in spiritual perception, three steps are to be taken in moral development." According to Steiner, moral development reveals the extent to which one has achieved control over one's inner life and can exercise it in harmony with the spiritual life of other people; it shows the real progress in spiritual development, the fruits of which are given in spiritual perception. It also guarantees the capacity to distinguish between false perceptions or illusions (which are possible in perceptions of both the outer world and the inner world) and true perceptions: i.e., the capacity to distinguish in any perception between the influence of subjective elements (i.e., viewpoint) and objective reality. Place in Western philosophy. Steiner built upon Goethe's conception of an imaginative power capable of synthesizing the sense-perceptible form of a thing (an image of its outer appearance) and the concept we have of that thing (an image of its inner structure or nature). Steiner added to this the conception that a further step in the development of thinking is possible when the thinker observes his or her own thought processes. "The organ of observation and the observed thought process are then identical, so that the condition thus arrived at is simultaneously one of perception through thinking and one of thought through perception." 
Thus, in Steiner's view, we can overcome the subject-object divide through inner activity, even though all human experience begins by being conditioned by it. In this connection, Steiner examines the step from thinking determined by outer impressions to what he calls sense-free thinking. He characterizes thoughts he considers without sensory content, such as mathematical or logical thoughts, as free deeds. Steiner believed he had thus located the origin of free will in our thinking, and in particular in sense-free thinking. Some of the epistemic basis for Steiner's later anthroposophical work is contained in the seminal work, Philosophy of Freedom. In his early works, Steiner sought to overcome what he perceived as the dualism of Cartesian idealism and Kantian subjectivism by developing Goethe's conception of the human being as a natural-supernatural entity, that is: natural in that humanity is a product of nature, supernatural in that through our conceptual powers we extend nature's realm, allowing it to achieve a reflective capacity in us as philosophy, art and science. Steiner was one of the first European philosophers to overcome the subject-object split in Western thought. Though not well known among philosophers, his philosophical work was taken up by Owen Barfield (and through him influenced the Inklings, an Oxford group of Christian writers that included J. R. R. Tolkien and C. S. Lewis). Christian and Jewish mystical thought have also influenced the development of anthroposophy. Union of science and spirit. Steiner believed in the possibility of applying the clarity of scientific thinking to spiritual experience, which he saw as deriving from an objectively existing spiritual world. Steiner identified mathematics, which attains certainty through thinking itself, thus through inner experience rather than empirical observation, as the basis of his epistemology of spiritual experience. Relationship to religion. Christ as the center of earthly evolution. Steiner's writing, though appreciative of all religions and cultural developments, emphasizes Western tradition as having evolved to meet contemporary needs. He describes Christ and his mission on earth of bringing individuated consciousness as having a particularly important place in human evolution, whereby: Thus, anthroposophy considers there to be a being who unifies all religions, and who is not represented by any particular religious faith. This being is, according to Steiner, not only the Redeemer of the Fall from Paradise, but also the unique pivot and meaning of earth's evolutionary processes and of human history. To describe this being, Steiner periodically used terms such as the "Representative of Humanity" or the "good spirit" rather than any denominational term. Divergence from conventional Christian thought. Steiner's views of Christianity diverge from conventional Christian thought in key places, and include gnostic elements: According to Jane Gilmer, "Jung and Steiner were both versed in ancient gnosis and both envisioned a paradigmatic shift in the way it was delivered." As Gilles Quispel put it, "After all, Theosophy is a pagan, Anthroposophy a Christian form of modern Gnosis." Maria Carlson stated "Theosophy and Anthroposophy are fundamentally Gnostic systems in that they posit the dualism of Spirit and Matter." R. McL. Wilson in "The Oxford Companion to the Bible" agrees that Steiner and Anthroposophy are under the influence of gnosticism. Judaism. 
Rudolf Steiner wrote and lectured on Judaism and Jewish issues over much of his adult life. He was a fierce opponent of popular antisemitism, but asserted that there was no justification for the existence of Judaism and Jewish culture in the modern world, a radical assimilationist perspective which saw the Jews completely integrating into the larger society. He also supported Émile Zola's position in the Dreyfus affair. Steiner emphasized Judaism's central importance to the constitution of the modern era in the West but suggested that to appreciate the spirituality of the future it would need to overcome its tendency toward abstraction. Steiner financed the publication of the book "Die Entente-Freimaurerei und der Weltkrieg" (1919); he also wrote the foreword for the book, which was partly based upon his own ideas. The publication promoted a conspiracy theory according to which World War I was a consequence of a collusion of Freemasons and Jews, still favorite scapegoats of conspiracy theorists, whose alleged purpose was the destruction of Germany. The writing was later enthusiastically received by the Nazi Party. According to Dick Taverne, Steiner was a Nazi (i.e. a member of the NSDAP). In his later life, Steiner was accused by the Nazis of being a Jew, and Adolf Hitler called anthroposophy "Jewish methods". The anthroposophical institutions in Germany were banned during Nazi rule and several anthroposophists were sent to concentration camps. Important early anthroposophists who were Jewish included two central members on the executive boards of the precursors to the modern Anthroposophical Society, and Karl König, the founder of the Camphill movement, who had converted to Christianity. Martin Buber and Hugo Bergmann, who viewed Steiner's social ideas as a solution to the Arab–Jewish conflict, were also influenced by anthroposophy. There are numerous anthroposophical organisations in Israel, including the anthroposophical kibbutz Harduf, founded by Jesaiah Ben-Aharon, forty Waldorf kindergartens and seventeen Waldorf schools (as of 2018). A number of these organizations are striving to foster positive relationships between the Arab and Jewish populations: The Harduf Waldorf school includes both Jewish and Arab faculty and students, and has extensive contact with the surrounding Arab communities, while the first joint Arab-Jewish kindergarten was a Waldorf program in Hilf near Haifa. Christian Community. Towards the end of Steiner's life, a group of theology students (primarily Lutheran, with some Roman Catholic members) approached Steiner for help in reviving Christianity, in particular "to bridge the widening gulf between modern science and the world of spirit". They approached a notable Lutheran pastor, Friedrich Rittelmeyer, who was already working with Steiner's ideas, to join their efforts. Out of their co-operative endeavor, the "Movement for Religious Renewal", now generally known as The Christian Community, was born. Steiner emphasized that he considered this movement, and his role in creating it, to be independent of his anthroposophical work, as he wished anthroposophy to be independent of any particular religion or religious denomination. Reception. Anthroposophy's supporters include Saul Bellow, Selma Lagerlöf, Andrei Bely, Joseph Beuys, Owen Barfield, architect Walter Burley Griffin, Wassily Kandinsky, Andrei Tarkovsky, Bruno Walter, Right Livelihood Award winners Sir George Trevelyan and Ibrahim Abouleish, and child psychiatrist Eva Frommer. 
The historian of religion Olav Hammer has termed anthroposophy "the most important esoteric society in European history." However, authors, scientists, and physicians including Michael Shermer, Michael Ruse, Edzard Ernst, David Gorski, and Simon Singh have criticized anthroposophy's applications in the areas of medicine, biology, agriculture, and education as dangerous and pseudoscientific. Others, including former Waldorf pupil Dan Dugan and historian Geoffrey Ahern, have criticized anthroposophy itself as a dangerous quasi-religious movement that is fundamentally anti-rational and anti-scientific. Scientific basis. Though Rudolf Steiner studied natural science at the Vienna Technical University at the undergraduate level, his doctorate was in epistemology and very little of his work is directly concerned with the empirical sciences. In his mature work, when he did refer to science it was often to present phenomenological or Goethean science as an alternative to what he considered the materialistic science of his contemporaries. Steiner's primary interest was in applying the methodology of science to realms of inner experience and the spiritual worlds (his appreciation that the essence of science is its method of inquiry is unusual among esotericists), and Steiner called anthroposophy "Geisteswissenschaft" (science of the mind, cultural/spiritual science), a term generally used in German to refer to the humanities and social sciences. Whether this is a sufficient basis for anthroposophy to be considered a spiritual science has been a matter of controversy. As Freda Easton explained in her study of Waldorf schools, "Whether one accepts anthroposophy as a science depends upon whether one accepts Steiner's interpretation of a science that extends the consciousness and capacity of human beings to experience their inner spiritual world." Sven Ove Hansson has disputed anthroposophy's claim to a scientific basis, stating that its ideas are not empirically derived and neither reproducible nor testable. Carlo Willmann points out that as, on its own terms, anthroposophical methodology offers no possibility of being falsified except through its own procedures of spiritual investigation, no intersubjective validation is possible by conventional scientific methods; it thus cannot stand up to empiricist critics. Peter Schneider describes such objections as untenable, asserting that if a non-sensory, non-physical realm exists, then according to Steiner the experiences of pure thinking possible within the normal realm of consciousness would already be experiences of that, and it would be impossible to exclude the possibility of empirically grounded experiences of other supersensory content. Olav Hammer suggests that anthroposophy carries scientism "to lengths unparalleled in any other Esoteric position" due to its dependence upon claims of clairvoyant experience and its subsuming of natural science under "spiritual science." Hammer also asserts that the development of what he calls "fringe" sciences such as anthroposophic medicine and biodynamic agriculture is justified partly on the basis of the ethical and ecological values they promote, rather than purely on a scientific basis. Though Steiner saw that spiritual vision itself is difficult for others to achieve, he recommended open-mindedly exploring and rationally testing the results of such research; he also urged others to follow a spiritual training that would allow them directly to apply his methods to achieve comparable results. 
Anthony Storr stated about Rudolf Steiner's Anthroposophy: "His belief system is so eccentric, so unsupported by evidence, so manifestly bizarre, that rational skeptics are bound to consider it delusional... But, whereas Einstein's way of perceiving the world by thought became confirmed by experiment and mathematical proof, Steiner's remained intensely subjective and insusceptible of objective confirmation." According to Dan Dugan, Steiner championed the following pseudoscientific claims, which are also promoted by Waldorf schools: Religious nature. As an explicitly spiritual movement, anthroposophy has sometimes been called a religious philosophy. In 1998, People for Legal and Non-Sectarian Schools (PLANS) started a lawsuit alleging that anthroposophy is a religion for Establishment Clause purposes and that therefore several California school districts should not be chartering Waldorf schools; the lawsuit was dismissed in 2012 for failure to show anthroposophy was a religion. In 2000, a French court ruled that a government minister's description of anthroposophy as a cult was defamatory. The teachings of Anthroposophy have been described as essentially Christian Gnosticism. Indeed, according to the official stance of the Catholic Church, Anthroposophy is "a neognostic heresy". Other heresiologists agree. The Lutheran (Missouri Synod) apologist and heresiologist Eldon K. Winker said that Steiner had the same Christology as Cerinthus. Indeed, Steiner thought that Jesus and Christ were two separate beings, who were united for a time. Statements on race. Some anthroposophical ideas challenged the National Socialist racialist and nationalistic agenda. In contrast, some American educators have criticized Waldorf schools for failing to equally include the fables and myths of all cultures, instead favoring European stories over African ones. In response to such critiques, the Anthroposophical Society in America published in 1998 a statement clarifying its stance: We explicitly reject any racial theory that may be construed to be part of Rudolf Steiner's writings. The Anthroposophical Society in America is an open, public society and it rejects any purported spiritual or scientific theory on the basis of which the alleged superiority of one race is justified at the expense of another race. Tommy Wieringa, a Dutch writer who grew up among Anthroposophists, commenting upon an essay by an Anthroposophist, wrote: "It was a meeting of old acquaintances: Nazi leaders such as Rudolf Hess and Heinrich Himmler already recognized a kindred spirit in Rudolf Steiner, with his theories about racial purity, esoteric medicine and biodynamic agriculture." The racism of Anthroposophy is spiritual and paternalistic (i.e. benevolent), while the racism of fascism is materialistic and often malign. Olav Hammer, a university professor and expert in new religious movements and Western esotericism, confirms that the racist and anti-Semitic character of Steiner's teachings can no longer be denied, even if it is a "spiritual racism".
2494
Aurochs
The aurochs (Bos primigenius) ( or ) is an extinct cattle species, considered to be the wild ancestor of modern domestic cattle. With a shoulder height of up to in bulls and in cows, it was one of the largest herbivores in the Holocene; it had massive elongated and broad horns that reached in length. The aurochs was part of the Pleistocene megafauna. It probably evolved in Asia and migrated west and north during warm interglacial periods. The oldest known aurochs fossils found in India and North Africa date to the Middle Pleistocene and in Europe to the Holstein interglacial. As indicated by fossil remains in Northern Europe, it reached Denmark and southern Sweden during the Holocene. The aurochs declined during the late Holocene due to habitat loss and hunting, and became extinct when the last individual died in 1627 in Jaktorów forest in Poland. The aurochs is depicted in Paleolithic cave paintings, Neolithic petroglyphs, Ancient Egyptian reliefs and Bronze Age figurines. It symbolised power, sexual potency and prowess in religions of the ancient Near East. Its horns were used in votive offerings, as trophies and drinking horns. Two aurochs domestication events occurred during the Neolithic Revolution. One gave rise to the domestic cattle ("Bos taurus") in the Fertile Crescent in the Near East that was introduced to Europe via the Balkans and the coast of the Mediterranean Sea. Hybridisation between aurochs and early domestic cattle occurred during the early Holocene. Domestication of the Indian aurochs led to the zebu cattle ("Bos indicus") that hybridised with early taurine cattle in the Near East about 4,000 years ago. Some modern cattle breeds exhibit features reminiscent of the aurochs, such as the dark colour and light eel stripe along the back of bulls, the lighter colour of cows, or an aurochs-like horn shape. Etymology. Both "aur" and "ur" are Germanic or Celtic words meaning "wild ox". The Old High German words "ūr" meaning "primordial" and "ohso" for "ox" were compounded to "ūrohso", which became the early modern "Aurochs". The Latin word "urus" was used for wild ox from the Gallic Wars onwards. The use of the plural form "" in English is a direct parallel of the German plural "Ochsen" and recreates the same distinction by analogy as English singular "ox" and plural "oxen". "Aurochs" is both the singular and the plural term used to refer to the animal. Taxonomy and evolution. The scientific name "Bos taurus" was introduced by Carl Linnaeus in 1758 for feral cattle in Poland. The scientific name "Bos primigenius" was proposed for the aurochs by Ludwig Heinrich Bojanus in 1825 (this was dated to 1827 by some authors) who described the skeletal differences between the aurochs and domestic cattle. The name "Bos namadicus" was used by Hugh Falconer in 1859 for cattle fossils found in Nerbudda deposits. "Bos primigenius mauritanicus" was coined by Philippe Thomas in 1881 who described fossils found in deposits near Oued Seguen west of Constantine, Algeria. In 2003, the International Commission on Zoological Nomenclature placed "Bos primigenius" on the "Official List of Specific Names in Zoology" and thereby recognized the validity of this name for a wild species. Three aurochs subspecies are recognised: Evolution. Calibrations using fossils of 16 Bovidae species indicate that the Bovini tribe evolved about . The "Bos" and "Bison" genetic lineages are estimated to have genetically diverged from the Bovini about . 
The following cladogram shows the phylogenetic relationships of the aurochs based on analysis of nuclear and mitochondrial genomes in the Bovini tribe: The cold Pliocene climate caused an extension of open grassland, which supported the evolution of large grazers. "Bos acutifrons" is a possible ancestor of the aurochs, of which a fossil skull was excavated in the Sivalik Hills in India that dates to the Early Pleistocene about . Fossils of the Indian aurochs were excavated in alluvial deposits in South India dating to the Middle Pleistocene. It possibly migrated west into the Middle East during the Pleistocene. An aurochs skull excavated in Tunisia's Kef Governorate from early Middle Pleistocene strata dating about is the oldest known fossil specimen to date, indicating that the genus "Bos" might have evolved in Africa and migrated to Eurasia during the Middle Pleistocene. Middle Pleistocene aurochs fossils were also excavated in a Saharan erg in the Hoggar Mountains. The earliest aurochs fossils excavated in Europe date to the Holstein interglacial 230,000 years Before Present (BP). A mitochondrial DNA analysis showed that hybridisation between the aurochs and the steppe bison ("Bison priscus") occurred about 120,000 years ago; the European bison ("Bison bonasus") contains up to 10% aurochs ancestry. Late Pleistocene aurochs fossils were found in Affad 23 in Sudan dating to 50,000 years ago when the climate in this region was more humid than during the African humid period. Two aurochs bones found in the Romito Cave in Italy were radiocarbon dated to 20,210 and 19,351 years BP. Aurochs bones found in a cave near San Teodoro, Sicily date to the Late Epigravettian 14,785–14,781 years BP. Fossils found at various locations in Denmark date to the Holocene 9,925–2,865 years BP. Mesowear analysis of aurochs premolar teeth indicates that it changed from an abrasion-dominated grazer in the Danish Preboreal to a mixed feeder in the Boreal, Atlantic and Subboreal periods of the Holocene. Description. According to a 16th century description by Sigismund von Herberstein, the aurochs was pitch-black with a grey streak along the back; his wood carving made in 1556 was based on a culled aurochs, which he had received in Mazovia. In 1827, Charles Hamilton Smith published an image of an aurochs that was based on an oil painting that he had purchased from a merchant in Augsburg, which is thought to have been made in the early 16th century. This painting is thought to have shown an aurochs, although some authors suggested it may have shown a hybrid between an aurochs and domestic cattle, or a Polish steer. Contemporary reconstructions of the aurochs are based on skeletons and the information derived from contemporaneous artistic depictions and historic descriptions of the animal. Coat colour. Remains of aurochs hair were not known until the early 1980s. Depictions show that the North African aurochs may have had a light saddle marking on its back. Calves were probably born with a chestnut colour, and young bulls changed to black with a white eel stripe running down the spine, while cows retained a reddish-brown colour. Both sexes had a light-coloured muzzle, but evidence for variation in coat colour does not exist. Egyptian grave paintings show cattle with a reddish-brown coat colour in both sexes, with a light saddle, but the horn shape of these suggest that they may depict domesticated cattle. 
Many primitive cattle breeds, particularly those from Southern Europe, display similar coat colours to the aurochs, including the black colour in bulls with a light eel stripe, a pale mouth, and similar sexual dimorphism in colour. A feature often attributed to the aurochs is blond forehead hairs. According to historical descriptions of the aurochs, it had long and curly forehead hair, but none mentions a certain colour. Although the colour is present in a variety of primitive cattle breeds, it is probably a discolouration that appeared after domestication. Body shape. The proportions and body shape of the aurochs were strikingly different from many modern cattle breeds. For example, the legs were considerably longer and more slender, resulting in a shoulder height that nearly equalled the trunk length. The skull, carrying the large horns, was substantially larger and more elongated than in most cattle breeds. As in other wild bovines, the body shape of the aurochs was athletic, and especially in bulls, showed a strongly expressed neck and shoulder musculature. Therefore, the fore hand was larger than the rear, similar to the wisent, but unlike many domesticated cattle. Even in carrying cows, the udder was small and hardly visible from the side; this feature is equal to that of other wild bovines. Size. The aurochs was one of the largest herbivores in Holocene Europe. The size of an aurochs appears to have varied by region, with larger specimens in northern Europe than farther south. Aurochs in Denmark and Germany ranged in height at the shoulders between in bulls and in cows, while aurochs bulls in Hungary reached . The African aurochs was similar in size to the European aurochs in the Pleistocene, but declined in size during the transition to the Holocene; it may have also varied in size geographically. The body mass of aurochs appears to have shown some variability. Some individuals reached around , whereas those from the late Middle Pleistocene are estimated to have weighed up to . The aurochs exhibited considerable sexual dimorphism in the size of males and females. Horns. The horns were massive, reaching in length and between in diameter. Its horns grew from the skull at a 60° angle to the muzzle facing forwards and were curved in three directions, namely upwards and outwards at the base, then swinging forwards and inwards, then inwards and upwards. The curvature of bull horns was more strongly expressed than horns of cows. The basal circumference of horn cores reached in the largest Chinese specimen and in a French specimen. Some cattle breeds still show horn shapes similar to that of the aurochs, such as the Spanish fighting bull, and occasionally also individuals of derived breeds. Genetics. A well-preserved aurochs bone yielded sufficient mitochondrial DNA for a sequence analysis, which showed that its genome consists of 16,338 base pairs. Further studies using the aurochs whole genome sequence have identified candidate microRNA-regulated domestication genes. Distribution and habitat. The aurochs was widely distributed in North Africa, Mesopotamia, and throughout Europe to the Pontic–Caspian steppe, Caucasus and Western Siberia in the west and to the Gulf of Finland and Lake Ladoga in the north. Fossil horns attributed to the aurochs were found in Late Pleistocene deposits at an elevation of on the eastern margin of the Tibetan plateau close to the Heihe River in Zoigê County that date to about 26,620±600 years BP. 
Most fossils in China were found in plains below in Heilongjiang, Yushu, Jilin, northeastern Manchuria, Inner Mongolia, near Beijing, Yangyuan County in Hebei province, Datong and Dingcun in Shanxi province, Huan County in Gansu and in Guizhou provinces. Ancient DNA in aurochs fossils found in Northeast China indicate that the aurochs survived in the region until at least 5,000 years BP. Fossils were also excavated on the Korean Peninsula, and in the Japanese archipelago. Landscapes in Europe probably consisted of dense forests throughout much of the last few thousand years. The aurochs is likely to have used riparian forests and wetlands along lakes. Pollen of mostly small shrubs found in fossiliferous sediments with aurochs remains in China indicate that it preferred temperate grassy plains or grasslands bordering woodlands. It may have also lived in open grasslands. In the warm Atlantic period of the Holocene, it was restricted to remaining open country and forest margins, where competition with livestock and humans gradually increased leading to a successive decline of the aurochs. Extinction. In southern Sweden, the aurochs was present during the Holocene climatic optimum until at least 7,800 years BP. In Denmark, the first known local extinction of the aurochs occurred after the sea level rise on the newly formed Danish islands about 8,000–7,500 years BP, and the last documented aurochs lived in southern Jutland around 3,000 years BP. The latest known aurochs fossil in Britain dates to 3,245 years BP, and it was probably extinct by 3,000 years ago. The African aurochs may have survived until at least to the Roman period, as indicated by fossils found in Buto and Faiyum in the Nile Delta. It was still widespread in Europe during the time of the Roman Empire, when it was widely popular as a battle beast in Roman amphitheatres. Excessive hunting began and continued until it was nearly extinct. By the 13th century, the aurochs existed only in small numbers in Eastern Europe, and hunting it became a privilege of nobles and later royals. Fossils found in West Bengal indicate that the Indian aurochs may have survived until the early 12th century. The gradual extinction of the aurochs in Central Europe was concurrent with the clearcutting of large forest tracts between the 9th and 12th centuries. The population in Hungary declined since at least the 9th century and was extinct in the 13th century. Subfossil data indicate that it survived in northwestern Transylvania (in Romania) until the 14th to 16th century, in western Moldavia (also in Romania) until probably the early 17th century, and in northeastern Bulgaria and around Sofia until the 17th century at most. An aurochs horn found at a medieval site in Sofia indicates that it survived in western Bulgaria until the second half of the 17th to the first half of the 18th century. The last known aurochs herd lived in a marshy woodland in Poland's Jaktorów Forest. It decreased from around 50 individuals in the mid 16th century to four individuals by 1601. The last aurochs cow died in 1627 from natural causes. Behaviour and ecology. Aurochs formed small herds mainly in winter, but lived singly or in smaller groups during the summer. If aurochs had social behaviour similar to their descendants, social status was gained through displays and fights, in which both cows and bulls engaged. With its hypsodont jaw, the aurochs was probably a grazer, with a food selection very similar to domesticated cattle feeding on grass, twigs and acorns. 
Mating season was in September, and calves were born in spring. The bulls had severe fights, and evidence from the Jaktorów forest shows that these could lead to death. In autumn, aurochs fed intensively for the winter and were fatter and shinier than during the rest of the year. Calves stayed with their mother until they were strong enough to join and keep up with the herd on the feeding grounds. They were vulnerable to predation by the grey wolf ("Canis lupus") and brown bear ("Ursus arctos"), while healthy adult aurochs probably did not have to fear predators. The lion ("Panthera leo"), tiger ("Panthera tigris") and hyena ("Crocuta crocuta") were likely predators in prehistoric times. According to historical descriptions, the aurochs was swift and could be very aggressive, but was not afraid of humans. Cultural significance. In Asia. Acheulean layers in Hunasagi on India's southern Deccan Plateau yielded aurochs bones with cut marks. An aurochs bone with cut marks induced with flint was found in a Middle Paleolithic layer at the Nesher Ramla Homo site in Israel; it was dated to Marine Isotope Stage 5 about 120,000 years ago. An archaeological excavation in Israel found traces of a feast held by the Natufian culture around 12,000 years BP, in which three aurochs were eaten. This appears to be an uncommon occurrence in the culture and was held in conjunction with the burial of an older woman, presumably of some social status. Petroglyphs depicting aurochs in Gobustan Rock Art in Azerbaijan date to the Upper Paleolithic to Neolithic periods. Aurochs bones and skulls found at the settlements of Mureybet, Hallan Çemi and Çayönü indicate that people stored and shared food in the Pre-Pottery Neolithic B culture. Remains of an aurochs were also found in a necropolis in Sidon, Lebanon, dating to around 3,700 years BP; the aurochs was buried together with numerous animals, a few human bones and foods. Seals dating to the Indus Valley civilisation found in Harappa and Mohenjo-daro show an animal with curved horns like an aurochs. Aurochs figurines were made by the Maykop culture in the Western Caucasus. The aurochs is denoted by the Akkadian words rīmu and rēmu, both used in the context of hunts by rulers such as Naram-Sin of Akkad, Tiglath-Pileser I and Shalmaneser III; in Mesopotamia, it symbolised power and sexual potency, was an epithet of the gods Enlil and Shamash, and denoted prowess as an epithet of the king Sennacherib and the hero Gilgamesh. Wild bulls are frequently referred to in Ugaritic texts as hunted by and sacrificed to the god Baal. An aurochs is depicted on Babylon's Ishtar Gate, constructed in the 6th century BC. In Africa. Petroglyphs depicting aurochs found in Qurta in the upper Nile valley were dated to the Late Pleistocene about 19,000–15,000 years BP using luminescence dating and are the oldest engravings found to date in Africa. Aurochs are part of hunting scenes in reliefs in a tomb at Thebes, Egypt dating to the 20th century BC, and in the mortuary temple of Ramesses III at Medinet Habu dating to around 1175 BC. The latter is the youngest depiction of aurochs in Ancient Egyptian art to date. In Europe. The aurochs is widely represented in Paleolithic cave paintings in the Chauvet and Lascaux caves in southern France dating to 36,000 and 21,000 years BP, respectively. Two Paleolithic rock engravings in the Calabrian Romito Cave depict an aurochs. Palaeolithic engravings showing aurochs were also found on the Italian island of Levanzo.
Upper Paleolithic rock engravings and paintings depicting the aurochs were also found in caves on the Iberian Peninsula dating from the Gravettian to the Magdalenian cultures. Aurochs bones with chop and cut marks were found at various Mesolithic hunting and butchering sites in France, Luxemburg, Germany, the Netherlands, England and Denmark. Aurochs bones were also found in Mesolithic settlements by the Narva and Emajõgi rivers in Estonia. Aurochs and human bones were uncovered from pits and burnt mounds at several Neolithic sites in England. A cup found at the Greek site of Vaphio shows a hunting scene in which people try to capture an aurochs. One of the bulls throws one hunter on the ground while attacking the second with its horns. The cup seems to date to Mycenaean Greece. Greeks and Paeonians hunted aurochs and used their huge horns as trophies, cups for wine, and offerings to the gods and heroes. The ox mentioned by Samus, Philippus of Thessalonica and Antipater as killed by Philip V of Macedon in the foothills of Mount Orvilos was actually an aurochs; Philip offered the horns, which were long, and the skin to a temple of Hercules. The aurochs was described in Julius Caesar's "Commentarii de Bello Gallico". Aurochs were occasionally captured and exhibited in venatio shows in Roman amphitheatres such as the Colosseum. Aurochs horns were often used by Romans as hunting horns. In the "Nibelungenlied", Siegfried kills four aurochs. During the Middle Ages, aurochs horns were used as drinking horns, including the horn of the last bull; many aurochs horn sheaths are preserved today. The aurochs drinking horn at Corpus Christi College, Cambridge, was engraved with the college's coat of arms in the 17th century. An aurochs head with a star between its horns and Christian iconographic elements represents the official coat of arms of Moldavia, perpetuated for centuries. Aurochs were hunted with arrows, nets and hunting dogs, and the hair on the forehead was cut from the living animal; belts were made out of this hair and were believed to increase the fertility of women. When the aurochs was slaughtered, the "os cordis" was extracted from the heart; this bone contributed to the mystique and magical powers that were attributed to it. In eastern Europe, the aurochs has left traces in expressions like "behaving like an aurochs" for a drunken person behaving badly, and "a bloke like an aurochs" for big and strong people. Domestication. The earliest known domestication of the aurochs dates to the Neolithic Revolution in the Fertile Crescent, where cattle hunted and kept by Neolithic farmers gradually decreased in size between 9800 and 7500 BC. Aurochs bones found at Mureybet and Göbekli Tepe are larger in size than cattle bones from later Neolithic settlements in northern Syria like Dja'de el-Mughara and Tell Halula. In Late Neolithic sites of northern Iraq and western Iran dating to the sixth millennium BC, cattle remains are also smaller but more frequent, indicating that domesticated cattle were imported during the Halaf culture from the central Fertile Crescent region. Results of genetic research indicate that the modern taurine cattle ("Bos taurus") arose from 80 aurochs tamed in southeastern Anatolia and northern Syria about 10,500 years ago. Taurine cattle spread into the Balkans and northern Italy along the Danube River and the coast of the Mediterranean Sea. Hybridisation between male aurochs and early domestic cattle occurred in central Europe between 9500 and 1000 BC.
Analyses of mitochondrial DNA sequences of Italian aurochs specimens dated to 17,000–7,000 years ago and 51 modern cattle breeds revealed some degree of introgression of aurochs genes into south European cattle, indicating that female aurochs had contact with free-ranging domestic cattle. Cattle bones of various sizes found at a Chalcolithic settlement in the Kutná Hora District provide further evidence for hybridisation of aurochs and domestic cattle between 3000 and 2800 BC in the Bohemian region. Whole genome sequencing of a 6,750-year-old aurochs bone found in England was compared with genome sequence data of 81 cattle and single-nucleotide polymorphism data of 1,225 cattle. Results revealed that British and Irish cattle breeds share some genetic variants with the aurochs specimen; early herders in Britain might have been responsible for the local gene flow from aurochs into the ancestors of British and Irish cattle. The Murboden cattle breed also exhibits evidence of sporadic introgression from female European aurochs into domestic cattle in the Alps. Domestic cattle continued to diminish in both body and horn size until the Middle Ages. The Indian aurochs is thought to have been domesticated 10,000–8,000 years ago. Aurochs fossils found at the Neolithic site of Mehrgarh in Pakistan are dated to around 8,000 years BP and represent some of the earliest evidence for its domestication on the Indian subcontinent. Female Indian aurochs contributed to the gene pool of zebu ("Bos indicus") between 5,500 and 4,000 years BP during the expansion of pastoralism in northern India. The zebu initially spread eastwards to Southeast Asia. Hybridisation between zebu and early taurine cattle occurred in the Near East after 4,000 years BP, coinciding with the drought period during the 4.2-kiloyear event. The zebu was introduced to East Africa about 3,500–2,500 years ago, and reached Mongolia in the 13th and 14th centuries. A third domestication event thought to have occurred in Egypt's Western Desert is not supported by results of an analysis of genetic admixture, introgression and migration patterns of 3,196 domestic cattle representing 180 populations. Breeding of aurochs-like cattle. In the early 1920s, Heinz Heck initiated a selective breeding program at Hellabrunn Zoo attempting to breed back the aurochs using several cattle breeds; the result is called Heck cattle. Herds of these cattle were released into Oostvaardersplassen, a polder in the Netherlands, in the 1980s as aurochs surrogates for naturalistic grazing with the aim of restoring prehistorical landscapes. Large numbers of them died of starvation during the cold winters of 2005 and 2010, and the project's policy of non-intervention was ended in 2018. Starting in 1996, Heck cattle were crossed with southern European cattle breeds such as Sayaguesa cattle, Chianina and, to a lesser extent, Spanish Fighting Bulls in the hope of creating a more aurochs-like animal. The resulting crossbreeds are called Taurus cattle. Other breeding-back projects are the Tauros Programme and the Uruz Project. However, approaches aiming at breeding an aurochs-like phenotype do not equate to an aurochs-like genotype.
2499
Asynchronous Transfer Mode
Asynchronous Transfer Mode (ATM) is a telecommunications standard defined by the American National Standards Institute and ITU-T (formerly CCITT) for digital transmission of multiple types of traffic. ATM was developed to meet the needs of the Broadband Integrated Services Digital Network as defined in the late 1980s, and designed to integrate telecommunication networks. It can handle both traditional high-throughput data traffic and real-time, low-latency content such as telephony (voice) and video. ATM provides functionality that uses features of circuit switching and packet switching networks by using asynchronous time-division multiplexing. In the OSI reference model data link layer (layer 2), the basic transfer units are called "frames". In ATM these frames are of a fixed length (53 octets) called "cells". This differs from approaches such as Internet Protocol (IP) or Ethernet that use variable-sized packets or frames. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the data exchange begins. These virtual circuits may be either permanent (dedicated connections that are usually preconfigured by the service provider), or switched (set up on a per-call basis using signaling and disconnected when the call is terminated). The ATM network reference model approximately maps to the three lowest layers of the OSI model: physical layer, data link layer, and network layer. ATM is a core protocol used in the synchronous optical networking and synchronous digital hierarchy (SONET/SDH) backbone of the public switched telephone network and in the Integrated Services Digital Network (ISDN) but has largely been superseded in favor of next-generation networks based on IP technology. Wireless and mobile ATM never established a significant foothold. Protocol architecture. To minimize queuing delay and packet delay variation (PDV), all ATM cells are the same small size. Reduction of PDV is particularly important when carrying voice traffic, because the conversion of digitized voice into an analogue audio signal is an inherently real-time process. The decoder needs an evenly spaced stream of data items. At the time of the design of ATM, 155 Mbit/s synchronous digital hierarchy with 135 Mbit/s payload was considered a fast optical network link, and many plesiochronous digital hierarchy links in the digital network were considerably slower, ranging from 1.544 to 45 Mbit/s in the US, and 2 to 34 Mbit/s in Europe. At 155 Mbit/s, a typical full-length 1,500 byte Ethernet frame would take 77.42 µs to transmit. On a lower-speed 1.544 Mbit/s T1 line, the same packet would take up to 7.8 milliseconds. A queuing delay induced by several such data packets might exceed the figure of 7.8 ms several times over. This was considered unacceptable for speech traffic. The design of ATM aimed for a low-jitter network interface. Cells were introduced to provide short queuing delays while continuing to support datagram traffic. ATM broke up all packets, data, and voice streams into 48-byte chunks, adding a 5-byte routing header to each one so that they could be reassembled later. The choice of 48 bytes was political rather than technical. When the CCITT (now ITU-T) was standardizing ATM, parties from the United States wanted a 64-byte payload because this was felt to be a good compromise between larger payloads optimized for data transmission and shorter payloads optimized for real-time applications like voice. 
Parties from Europe wanted 32-byte payloads because the small size (and therefore short transmission times) improves performance for voice applications. Most of the European parties eventually came around to the arguments made by the Americans, but France and a few others held out for a shorter cell length. With 32 bytes, France would have been able to implement an ATM-based voice network with calls from one end of France to the other requiring no echo cancellation. 48 bytes (plus 5 header bytes = 53) was chosen as a compromise between the two sides. 5-byte headers were chosen because it was thought that 10% of the payload was the maximum price to pay for routing information. ATM multiplexed these 53-byte cells instead of packets, which reduced worst-case cell contention jitter by a factor of almost 30, reducing the need for echo cancellers. Cell structure. An ATM cell consists of a 5-byte header and a 48-byte payload. ATM defines two different cell formats: user–network interface (UNI) and network–network interface (NNI). Most ATM links use the UNI cell format. ATM uses the PT field to designate various special kinds of cells for operations, administration and management (OAM) purposes, and to delineate packet boundaries in some ATM adaptation layers (AAL). If the most significant bit (MSB) of the PT field is 0, this is a user data cell, and the other two bits are used to indicate network congestion and as a general-purpose header bit available for ATM adaptation layers. If the MSB is 1, this is a management cell, and the other two bits indicate the type: network management segment, network management end-to-end, resource management, and reserved for future use. Several ATM link protocols use the HEC field to drive a CRC-based framing algorithm, which allows locating the ATM cells with no overhead beyond what is otherwise needed for header protection. The 8-bit CRC is used to correct single-bit header errors and detect multi-bit header errors. When multi-bit header errors are detected, the current and subsequent cells are dropped until a cell with no header errors is found. A UNI cell reserves the GFC field for a local flow control and submultiplexing system between users. This was intended to allow several terminals to share a single network connection in the same way that two ISDN phones can share a single basic rate ISDN connection. All four GFC bits must be zero by default. The NNI cell format replicates the UNI format almost exactly, except that the 4-bit GFC field is re-allocated to the VPI field, extending the VPI to 12 bits. Thus, a single NNI ATM interconnection is capable of addressing almost 2^12 (4,096) VPs of up to almost 2^16 (65,536) VCs each. Service types. ATM supports different types of services via AALs. Standardized AALs include AAL1, AAL2, and AAL5, and the rarely used AAL3 and AAL4. AAL1 is used for constant bit rate (CBR) services and circuit emulation. Synchronization is also maintained at AAL1. AAL2 through AAL4 are used for variable bitrate (VBR) services, and AAL5 for data. Which AAL is in use for a given cell is not encoded in the cell. Instead, it is negotiated by or configured at the endpoints on a per-virtual-connection basis. Following the initial design of ATM, networks have become much faster. A 1500 byte (12000-bit) full-size Ethernet frame takes only 1.2 µs to transmit on a 10 Gbit/s network, reducing the motivation for small cells to reduce jitter due to contention. The increased link speeds by themselves do not eliminate jitter due to queuing.
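The delay arithmetic behind the choice of a small cell can be made concrete with a short calculation. The sketch below is illustrative only and is not part of any ATM specification; the helper function and output formatting are invented, while the link rates and frame sizes are the ones quoted above.

```python
# Serialization delay: the time a frame or cell occupies the line while being clocked out.
# Link rates and frame sizes are the ones quoted in the text; the helper is illustrative.

def serialization_delay_us(size_bytes: int, rate_bit_per_s: float) -> float:
    """Microseconds needed to transmit size_bytes at the given line rate."""
    return size_bytes * 8 / rate_bit_per_s * 1e6

ETHERNET_FRAME = 1500  # bytes, full-length frame
ATM_CELL = 53          # bytes, 48-byte payload plus 5-byte header

for name, rate in [("155 Mbit/s", 155e6), ("1.544 Mbit/s T1", 1.544e6), ("10 Gbit/s", 10e9)]:
    frame = serialization_delay_us(ETHERNET_FRAME, rate)
    cell = serialization_delay_us(ATM_CELL, rate)
    print(f"{name:>15}: frame {frame:10.2f} us, cell {cell:8.2f} us")

# Approximate output:
#      155 Mbit/s: frame      77.42 us, cell     2.74 us
# 1.544 Mbit/s T1: frame    7772.02 us, cell   274.61 us
#       10 Gbit/s: frame       1.20 us, cell     0.04 us
# On a T1, a voice cell queued behind one full-length data frame waits about 7.8 ms;
# queued behind a single cell it waits under 0.3 ms, which is the rationale for small cells.
```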
ATM provides a useful ability to carry multiple logical circuits on a single physical or virtual medium, although other techniques exist, such as Multi-link PPP, Ethernet VLANs, VXLAN, MPLS, and multi-protocol support over SONET. Virtual circuits. An ATM network must establish a connection before two parties can send cells to each other. This is called a virtual circuit (VC). It can be a permanent virtual circuit (PVC), which is created administratively on the end points, or a switched virtual circuit (SVC), which is created as needed by the communicating parties. SVC creation is managed by signaling, in which the requesting party indicates the address of the receiving party, the type of service requested, and whatever traffic parameters may be applicable to the selected service. "Call admission" is then performed by the network to confirm that the requested resources are available and that a route exists for the connection. Motivation. ATM operates as a channel-based transport layer, using VCs. This is encompassed in the concepts of virtual paths (VP) and virtual channels. Every ATM cell has an 8- or 12-bit virtual path identifier (VPI) and 16-bit virtual channel identifier (VCI) pair defined in its header. The VCI, together with the VPI, is used to identify the next destination of a cell as it passes through a series of ATM switches on its way to its destination. The length of the VPI varies according to whether the cell is sent on a user-network interface (at the edge of the network), or if it is sent on a network-network interface (inside the network). As these cells traverse an ATM network, switching takes place by changing the VPI/VCI values (label swapping). Although the VPI/VCI values are not necessarily consistent from one end of the connection to the other, the concept of a circuit "is" consistent (unlike IP, where any given packet could get to its destination by a different route than the others). ATM switches use the VPI/VCI fields to identify the virtual channel link (VCL) of the next network that a cell needs to transit on its way to its final destination. The function of the VCI is similar to that of the data link connection identifier (DLCI) in Frame Relay and the logical channel number and logical channel group number in X.25. Another advantage of the use of virtual circuits comes with the ability to use them as a multiplexing layer, allowing different services (such as voice, Frame Relay, and IP) to be carried over the same ATM connection. The VPI is useful for reducing the size of the switching table for virtual circuits that share common paths. Types. ATM can build virtual circuits and virtual paths either statically or dynamically. Static circuits (permanent virtual circuits or PVCs) or paths (permanent virtual paths or PVPs) require that the circuit is composed of a series of segments, one for each pair of interfaces through which it passes. PVPs and PVCs, though conceptually simple, require significant effort in large networks. They also do not support the re-routing of service in the event of a failure. Dynamically built PVPs (soft PVPs or SPVPs) and PVCs (soft PVCs or SPVCs), in contrast, are built by specifying the characteristics of the circuit (the service "contract") and the two endpoints. ATM networks create and remove switched virtual circuits (SVCs) on demand when requested by an end piece of equipment. One application for SVCs is to carry individual telephone calls when a network of telephone switches is inter-connected using ATM. SVCs were also used in attempts to replace local area networks with ATM.
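As a rough sketch of how the header fields and the label swapping described above fit together, the following fragment packs a UNI cell header (GFC, VPI, VCI, PT, CLP, plus the HEC octet) and then rewrites the VPI/VCI pair the way a switch would. The HEC is computed as a CRC-8 with generator x^8 + x^2 + x + 1 and the result XORed with 0x55, as specified for the HEC in ITU-T I.432; the function names, table contents and label values are invented for illustration, not taken from any real configuration.

```python
# Illustrative UNI header handling: GFC(4) VPI(8) VCI(16) PT(3) CLP(1) + HEC(8) = 5 octets.
# Field widths follow the UNI format described above; everything else is made up.

def hec(header4: bytes) -> int:
    """CRC-8 (x^8 + x^2 + x + 1) over the first four header octets, XORed with 0x55."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

def pack_uni_header(gfc: int, vpi: int, vci: int, pt: int, clp: int) -> bytes:
    """Pack the 4-byte header body and append the HEC octet."""
    word = ((gfc & 0xF) << 28) | ((vpi & 0xFF) << 20) | ((vci & 0xFFFF) << 4) \
           | ((pt & 0x7) << 1) | (clp & 0x1)
    body = word.to_bytes(4, "big")
    return body + bytes([hec(body)])

# Hypothetical per-switch label-swapping table: (in VPI, in VCI) -> (out VPI, out VCI).
swap_table = {(0, 100): (5, 42), (7, 33): (5, 43)}

in_vpi, in_vci = 0, 100
out_vpi, out_vci = swap_table[(in_vpi, in_vci)]
cell_header_out = pack_uni_header(gfc=0, vpi=out_vpi, vci=out_vci, pt=0, clp=0)
print(cell_header_out.hex())  # rewritten 5-octet header for the next hop of the same circuit
```

A real switch performs this lookup and rewrite in hardware; the sketch simply rebuilds the header, which also refreshes the HEC after the labels change.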
Routing. Most ATM networks supporting SPVPs, SPVCs, and SVCs use the Private Network Node Interface or the Private Network-to-Network Interface (PNNI) protocol to share topology information between switches and select a route through a network. PNNI is a link-state routing protocol like OSPF and IS-IS. PNNI also includes a very powerful route summarization mechanism to allow construction of very large networks, as well as a call admission control (CAC) algorithm which determines the availability of sufficient bandwidth on a proposed route through a network in order to satisfy the service requirements of a VC or VP. Traffic engineering. Another key ATM concept involves the traffic contract. When an ATM circuit is set up, each switch on the circuit is informed of the traffic class of the connection. ATM traffic contracts form part of the mechanism by which "quality of service" (QoS) is ensured. There are four basic types (and several variants): constant bit rate (CBR), variable bit rate (VBR), available bit rate (ABR) and unspecified bit rate (UBR). Each has a set of parameters describing the connection. VBR has real-time and non-real-time variants, and serves for "bursty" traffic. Non-real-time is sometimes abbreviated to vbr-nrt. Most traffic classes also introduce the concept of cell delay variation tolerance (CDVT), which defines the "clumping" of cells in time. Traffic policing. To maintain network performance, networks may apply traffic policing to virtual circuits to limit them to their traffic contracts at the entry points to the network, i.e. the user–network interfaces (UNIs) and network-to-network interfaces (NNIs): usage/network parameter control (UPC and NPC). The reference model given by the ITU-T and ATM Forum for UPC and NPC is the generic cell rate algorithm (GCRA), which is a version of the leaky bucket algorithm. CBR traffic will normally be policed to a peak cell rate (PCR) and CDVT alone, whereas VBR traffic will normally be policed using a dual leaky bucket controller to a PCR and CDVT as well as a sustainable cell rate (SCR) and maximum burst size (MBS). The MBS will normally be the packet (SAR-SDU) size for the VBR VC in cells. If the traffic on a virtual circuit exceeds its traffic contract, as determined by the GCRA, the network can either drop the cells or mark the Cell Loss Priority (CLP) bit (to identify a cell as potentially redundant). Basic policing works on a cell-by-cell basis, but this is sub-optimal for encapsulated packet traffic (as discarding a single cell will invalidate the whole packet). As a result, schemes such as partial packet discard (PPD) and early packet discard (EPD) have been created that will discard a whole series of cells until the next packet starts. This reduces the number of useless cells in the network, saving bandwidth for full packets. EPD and PPD work with AAL5 connections as they use the end-of-packet marker: the ATM user-to-ATM user (AUU) indication bit in the payload-type field of the header, which is set in the last cell of a SAR-SDU. Traffic shaping. Traffic shaping usually takes place in the network interface card (NIC) in user equipment, and attempts to ensure that the cell flow on a VC will meet its traffic contract, i.e. cells will not be dropped or reduced in priority at the UNI. Since the reference model given for traffic policing in the network is the GCRA, this algorithm is normally used for shaping as well, and single and dual leaky bucket implementations may be used as appropriate. Reference model. The ATM network reference model approximately maps to the three lowest layers of the OSI reference model. It specifies three layers: the physical layer, the ATM layer, and the ATM adaptation layer (AAL).
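For concreteness, here is a minimal sketch of the GCRA in its virtual-scheduling form, which underlies both the policing and the shaping described above: a cell arriving earlier than its theoretical arrival time minus the tolerance is non-conforming and may be dropped or have its CLP bit marked. The class name, parameter values and arrival times below are invented for illustration and do not correspond to any particular traffic contract.

```python
# Virtual-scheduling form of the generic cell rate algorithm (GCRA).
# T is the expected inter-cell spacing (1/PCR); tau is the tolerance (CDVT).
# Times are in arbitrary units; the arrival pattern is made up for illustration.

class GCRA:
    def __init__(self, increment: float, tolerance: float):
        self.T = increment     # emission interval, 1 / peak cell rate
        self.tau = tolerance   # cell delay variation tolerance
        self.tat = 0.0         # theoretical arrival time of the next conforming cell

    def conforming(self, arrival: float) -> bool:
        if arrival < self.tat - self.tau:
            return False       # cell arrived too early: drop it or mark its CLP bit
        self.tat = max(arrival, self.tat) + self.T
        return True

policer = GCRA(increment=10.0, tolerance=2.0)
print([policer.conforming(t) for t in (0, 10, 19, 25, 26, 50)])
# -> [True, True, True, False, False, True]
# The cells at t=25 and t=26 arrive well before the theoretical arrival time and are rejected.
```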
Deployment. ATM became popular with telephone companies and many computer makers in the 1990s. However, even by the end of the decade, the better price/performance of Internet Protocol-based products was competing with ATM technology for integrating real-time and bursty network traffic. Companies such as FORE Systems focused on ATM products, while other large vendors such as Cisco Systems provided ATM as an option. After the burst of the dot-com bubble, some still predicted that "ATM is going to dominate". However, in 2005 the ATM Forum, which had been the trade organization promoting the technology, merged with groups promoting other technologies, and eventually became the Broadband Forum. Wireless or mobile ATM. Wireless ATM, or mobile ATM, consists of an ATM core network with a wireless access network. ATM cells are transmitted from base stations to mobile terminals. Mobility functions are performed at an ATM switch in the core network, known as a "crossover switch", which is similar to the MSC (mobile switching center) of GSM networks. The advantage of wireless ATM is its high bandwidth and high-speed handoffs done at layer 2. In the early 1990s, Bell Labs and NEC research labs worked actively in this field. Andy Hopper from the University of Cambridge Computer Laboratory also worked in this area. A wireless ATM forum was formed to standardize the technology behind wireless ATM networks. The forum was supported by several telecommunication companies, including NEC, Fujitsu and AT&T. Mobile ATM aimed to provide high-speed multimedia communications technology, capable of delivering broadband mobile communications beyond that of GSM and WLANs.
2500
Anus
The anus (Latin, 'ring' or 'circle') is an opening at the opposite end of an animal's digestive tract from the mouth. Its function is to control the expulsion of feces, the residual semi-solid waste that remains after food digestion, which, depending on the type of animal, includes: matter which the animal cannot digest, such as bones; food material after the nutrients have been extracted, for example cellulose or lignin; ingested matter which would be toxic if it remained in the digestive tract; and dead or excess gut bacteria and other endosymbionts. Amphibians, reptiles, and birds use the same orifice (known as the cloaca) for excreting liquid and solid wastes, and for copulation and egg-laying. Monotreme mammals also have a cloaca, which is thought to be a feature inherited from the earliest amniotes via the therapsids. Marsupials have a single orifice for excreting both solids and liquids and, in females, a separate vagina for reproduction. Female placental mammals have completely separate orifices for defecation, urination, and reproduction; males have one opening for defecation and another for both urination and reproduction, although the channels flowing to that orifice are almost completely separate. The development of the anus was an important stage in the evolution of multicellular animals. It appears to have happened at least twice, following different paths in protostomes and deuterostomes. This accompanied or facilitated other important evolutionary developments: the bilaterian body plan, the coelom, and metamerism, in which the body was built of repeated "modules" which could later specialize, such as the heads of most arthropods, which are composed of fused, specialized segments. Among comb jellies, some species have one or sometimes two permanent anuses, while species such as the warty comb jelly grow a transient anus that disappears when it is no longer needed. Development. In animals at least as complex as an earthworm, the embryo forms a dent on one side, the blastopore, which deepens to become the archenteron, the first phase in the growth of the gut. In deuterostomes, the original dent becomes the anus while the gut eventually tunnels through to make another opening, which forms the mouth. The protostomes were so named because it was thought that in their embryos the dent formed the mouth first ("proto–" meaning "first") and the anus was formed later at the opening made by the other end of the gut. Research from 2001 shows that in protostomes the edges of the dent close up in the middle, leaving openings at both ends which become the mouth and the anus.
2501
Appendix
Appendix, or its plural form appendices, may refer to:
2502
Acantharea
The Acantharea (Acantharia) are a group of radiolarian protozoa, distinguished mainly by their strontium sulfate skeletons. Acantharians are heterotrophic marine microplankton that range in size from about 200 microns in diameter up to several millimeters. Some acantharians have photosynthetic endosymbionts and hence are considered mixotrophs. Morphology. Acantharian skeletons are composed of strontium sulfate, SrSO4, in the form of the mineral celestine. Celestine is named for the delicate blue colour of its crystals, and is the heaviest mineral in the ocean. The density of their celestine means that acantharian shells function as mineral ballast, resulting in fast sedimentation to bathypelagic depths. High settling fluxes of acantharian cysts have been observed at times in the Iceland Basin and the Southern Ocean, accounting for as much as half of the total gravitational organic carbon flux. Acantharian skeletons are composed of strontium sulfate crystals secreted by vacuoles surrounding each spicule or spine. Acantharians are unique among marine organisms for their ability to biomineralize strontium sulfate as the main component of their skeletons. However, unlike other radiolarians whose skeletons are made of silica, acantharian skeletons do not fossilize, primarily because strontium sulfate is very scarce in seawater and the crystals dissolve after the acantharians die. The arrangement of the spines is very precise, and is described by what is called the Müllerian law, which can be described in terms of lines of latitude and longitude – the spines lie on the intersections between five of the former, symmetric about an equator, and eight of the latter, spaced uniformly. Each line of longitude carries either two "tropical" spines or one "equatorial" and two "polar" spines, in alternation. The cell cytoplasm is divided into two regions: the endoplasm and the ectoplasm. The endoplasm, at the core of the cell, contains the main organelles, including many nuclei, and is delineated from the ectoplasm by a capsular wall made of a microfibril mesh. In symbiotic species, the algal symbionts are maintained in the endoplasm. The ectoplasm consists of cytoplasmic extensions used for prey capture and also contains food vacuoles for prey digestion. The ectoplasm is surrounded by a periplasmic cortex, also made up of microfibrils, but arranged into twenty plates, each with a hole through which one spicule projects. The cortex is linked to the spines by contractile myonemes, which assist in buoyancy control by allowing the ectoplasm to expand and contract, increasing and decreasing the total volume of the cell. Taxonomy. The way that the spines are joined at the center of the cell varies and is one of the primary characteristics by which acantharians are classified. The skeletons are made up of either ten diametric or twenty radial spicules. Diametric spicules cross the center of the cell, whereas radial spicules terminate at the center of the cell, where they form either a tight or a flexible junction depending on the species. Acantharians with diametric spicules or loosely attached radial spicules are able to rearrange or shed spicules and form cysts. The morphological classification system roughly agrees with phylogenetic trees based on the alignment of ribosomal RNA genes, although the groups are mostly polyphyletic. Holacanthida seems to have evolved first and includes molecular clades A, B, and D. Chaunacanthida evolved second and includes only one molecular clade, clade C.
Arthracanthida and Symphacanthida, which have the most complex skeletons, evolved most recently and constitute molecular clades E and F. Symbiosis. Many acantharians, including some in clade B (Holacanthida) and all in clades E & F (Symphiacanthida and Arthracanthida), host single-celled algae within their inner cytoplasm (endoplasm). By participating in this photosymbiosis, acantharians are essentially mixotrophs: they acquire energy through both heterotrophy and autotrophy. The relationship may make it possible for acantharians to be abundant in low-nutrient regions of the oceans and may also provide the extra energy necessary to maintain their elaborate strontium sulfate skeletons. It is hypothesized that the acantharians provide the algae with nutrients (N & P) that they acquire by capturing and digesting prey, in return for sugars that the algae produce during photosynthesis. It is not known, however, whether the algal symbionts benefit from the relationship or if they are simply being exploited and then digested by the acantharians. Symbiotic Holacanthida acantharians host diverse symbiont assemblages, including several genera of dinoflagellates ("Pelagodinium, Heterocapsa, Scrippsiella, Azadinium") and a haptophyte ("Chrysochromulina"). Clade E & F acantharians have a more specific symbiosis and primarily host symbionts from the haptophyte genus "Phaeocystis", although they sometimes also host "Chrysochromulina" symbionts. Clade F acantharians simultaneously host multiple species and strains of "Phaeocystis", and their internal symbiont community does not necessarily match the relative availability of potential symbionts in the surrounding environment. The mismatch between internal and external symbiont communities suggests that acantharians can be selective in choosing symbionts and probably maintain symbionts for extended periods of time rather than continuously digesting and recruiting new ones. Life cycle. Adults are usually multinucleated. Earlier diverging clades are able to shed their spines and form cysts, which are often referred to as reproductive cysts. Reproduction is thought to take place by formation of swarmer cells (formerly referred to as "spores"), which may be flagellate, and cysts have been observed to release these swarmers. Non-encysted cells have also been seen releasing swarmers in laboratory conditions. Not all life cycle stages have been observed, however, and no one has witnessed the fusion of swarmers to produce a new acantharian. Cysts are often found in sediment traps and it is therefore believed that the cysts help acantharians sink into deep water. Genetic data and some imaging suggest that non-cyst-forming acantharians may also sink to deep water to release swarmers. Releasing swarmer cells in deeper water may improve the survival chances of juveniles. Study of these organisms has been hampered mainly by an inability to "close the lifecycle" and maintain these organisms in culture through successive generations.
2503
African National Congress
The African National Congress (ANC) is a social-democratic political party in South Africa. A liberation movement known for its opposition to apartheid, it has governed the country since 1994, when the first post-apartheid election installed Nelson Mandela as President of South Africa. Cyril Ramaphosa, the incumbent national President, has served as President of the ANC since 18 December 2017. Founded on 8 January 1912 in Bloemfontein as the South African Native National Congress, the organisation was formed to agitate for the rights of black South Africans. When the National Party government came to power in 1948, the ANC's central purpose became to oppose the new government's policy of institutionalised apartheid. To this end, its methods and means of organisation shifted; its adoption of the techniques of mass politics, and the swelling of its membership, culminated in the Defiance Campaign of civil disobedience in 1952–53. The ANC was banned by the South African government between April 1960 – shortly after the Sharpeville massacre – and February 1990. During this period, despite periodic attempts to revive its domestic political underground, the ANC was forced into exile by increasing state repression, which saw many of its leaders imprisoned on Robben Island. Headquartered in Lusaka, Zambia, the exiled ANC dedicated much of its attention to a campaign of sabotage and guerrilla warfare against the apartheid state, carried out under its military wing, uMkhonto we Sizwe, which was founded in 1961 in partnership with the South African Communist Party (SACP). The ANC was condemned as a terrorist organisation by the governments of South Africa, the United States, and the United Kingdom. However, it positioned itself as a key player in the negotiations to end apartheid, which began in earnest after the ban was repealed in 1990. In the post-apartheid era, the ANC continues to identify itself foremost as a liberation movement, although it is also a registered political party. Partly due to its Tripartite Alliance with the South African Communist Party (SACP) and the Congress of South African Trade Unions, it has retained a comfortable electoral majority at the national level and in most provinces, and has provided each of South Africa's five presidents since 1994. South Africa is considered a dominant-party state. However, the ANC's electoral majority has declined consistently since 2004, and in the most recent elections – the 2021 local elections – its share of the national vote dropped below 50% for the first time ever. Over the last decade, the party has been embroiled in a number of controversies, particularly relating to widespread allegations of political corruption among its members. History. Origins. A successor of the Cape Colony's Imbumba Yamanyama organisation, the ANC was founded as the South African Native National Congress in Bloemfontein on 8 January 1912, and was renamed the African National Congress in 1923. Writer and historian Walter Rubusana founded the organisation alongside Sol Plaatje, John Dube and Pixley ka Isaka Seme, who, like much of the ANC's early membership, were drawn from the conservative, educated, and religious professional classes of black South African society. Although they would not take part themselves, Xhosa chiefs showed strong support for the organisation; as a result, King Jongilizwe donated 50 cows to it at its founding.
Around 1920, in a partial shift away from its early focus on the "politics of petitioning", the ANC developed a programme of passive resistance directed primarily at the expansion and entrenchment of pass laws. When Josiah Gumede took over as ANC president in 1927, he advocated for a strategy of mass mobilisation and cooperation with the Communist Party, but was voted out of office in 1930 and replaced with the traditionalist Seme, whose leadership saw the ANC's influence wane. In the 1940s, Alfred Bitini Xuma revived some of Gumede's programmes, assisted by a surge in trade union activity and by the formation in 1944 of the left-wing ANC Youth League under a new generation of activists, among them Walter Sisulu, Nelson Mandela, and Oliver Tambo. After the National Party was elected into government in 1948 on a platform of apartheid, entailing the further institutionalisation of racial segregation, this new generation pushed for a Programme of Action which explicitly advocated African nationalism and led the ANC, for the first time, to the sustained use of mass mobilisation techniques like strikes, stay-aways, and boycotts. This culminated in the 1952–53 Defiance Campaign, a campaign of mass civil disobedience organised by the ANC, the Indian Congress, and the coloured Franchise Action Council in protest of six apartheid laws. The ANC's membership swelled. In June 1955, it was one of the groups represented at the multi-racial Congress of the People in Kliptown, Soweto, which ratified the Freedom Charter, from then onwards a fundamental document in the anti-apartheid struggle. The Charter was the basis of the enduring Congress Alliance, but was also used as a pretext to prosecute hundreds of activists, among them most of the ANC's leadership, in the Treason Trial. Before the trial was concluded, the Sharpeville massacre occurred on 21 March 1960. In the aftermath, the ANC was banned by the South African government. It was not unbanned until February 1990, almost three decades later. Exile in Lusaka. After its banning in April 1960, the ANC was driven underground, a process hastened by a barrage of government banning orders, by an escalation of state repression, and by the imprisonment of senior ANC leaders pursuant to the Rivonia trial and Little Rivonia trial. From around 1963, the ANC effectively abandoned much of even its underground presence inside South Africa and operated almost entirely from its external mission, with headquarters first in Morogoro, Tanzania, and later in Lusaka, Zambia. For the entirety of its time in exile, the ANC was led by Tambo – first "de facto", with president Albert Luthuli under house arrest in Zululand; then in an acting capacity, after Luthuli's death in 1967; and, finally, officially, after a leadership vote in 1985. Also notable about this period was the extremely close relationship between the ANC and the reconstituted South African Communist Party (SACP), which was also in exile. uMkhonto we Sizwe. In 1961, partly in response to the Sharpeville massacre, leaders of the SACP and the ANC formed a military body, Umkhonto we Sizwe (MK, "Spear of the Nation"), as a vehicle for armed struggle against the apartheid state. Initially, MK was not an official ANC body, nor had it been directly established by the ANC National Executive: it was considered an autonomous organisation, until such time as the ANC formally recognised it as its armed wing in October 1962. 
In the first half of the 1960s, MK was preoccupied with a campaign of sabotage attacks, especially bombings of unoccupied government installations. As the ANC reduced its presence inside South Africa, however, MK cadres were increasingly confined to training camps in Tanzania and neighbouring countries – with such exceptions as the Wankie Campaign, a momentous military failure. In 1969, Tambo was compelled to call the landmark Morogoro Conference to address the grievances of the rank-and-file, articulated by Chris Hani in a memorandum which depicted MK's leadership as corrupt and complacent. Although MK's malaise persisted into the 1970s, conditions for armed struggle soon improved considerably, especially after the Soweto uprising of 1976 in South Africa saw thousands of students – inspired by Black Consciousness ideas – cross the borders to seek military training. MK guerrilla activity inside South Africa increased steadily over this period, with one estimate recording an increase from 23 incidents in 1977 to 136 incidents in 1985. In the latter half of the 1980s, a number of South African civilians were killed in these attacks, a reversal of the ANC's earlier reluctance to incur civilian casualties. Fatal attacks included the 1983 Church Street bombing, the 1985 Amanzimtoti bombing, the 1986 Magoo's Bar bombing, and the 1987 Johannesburg Magistrate's Court bombing. Partly in retaliation, the South African Defence Force increasingly crossed the border to target ANC members and ANC bases, as in the 1981 raid on Maputo, 1983 raid on Maputo, and 1985 raid on Gaborone. During this period, MK activities led the governments of Margaret Thatcher and Ronald Reagan to condemn the ANC as a terrorist organisation. In fact, neither the ANC nor Mandela were removed from the U.S. terror watch list until 2008. The animosity of Western regimes was partly explained by the Cold War context, and by the considerable amount of support – both financial and technical – that the ANC received from the Soviet Union. Negotiations to end apartheid. From the mid-1980s, as international and internal opposition to apartheid mounted, elements of the ANC began to test the prospects for a negotiated settlement with the South African government, although the prudence of abandoning armed struggle was an extremely controversial topic within the organisation. Following preliminary contact between the ANC and representatives of the state, business, and civil society, President F. W. de Klerk announced in February 1990 that the government would unban the ANC and other banned political organisations, and that Mandela would be released from prison. Some ANC leaders returned to South Africa from exile for so-called "talks about talks", which led in 1990 and 1991 to a series of bilateral accords with the government establishing a mutual commitment to negotiations. Importantly, the Pretoria Minute of August 1990 included a commitment by the ANC to unilaterally suspend its armed struggle. This made possible the multi-party Convention for a Democratic South Africa and later the Multi-Party Negotiating Forum, in which the ANC was regarded as the main representative of the interests of the anti-apartheid movement. However, ongoing political violence, which the ANC attributed to a state-sponsored third force, led to recurrent tensions. Most dramatically, after the Boipatong massacre of June 1992, the ANC announced that it was withdrawing from negotiations indefinitely. 
It faced further casualties in the Bisho massacre, the Shell House massacre, and in other clashes with state forces and supporters of the Inkatha Freedom Party (IFP). However, once negotiations resumed, they resulted in November 1993 in an interim Constitution, which governed South Africa's first democratic elections on 27 April 1994. In the elections, the ANC won an overwhelming 62.65% majority of the vote. Mandela was elected president and formed a coalition Government of National Unity, which, under the provisions of the interim Constitution, also included the National Party and IFP. The ANC has controlled the national government since then. Breakaways. In the post-apartheid era, two significant breakaway groups have been formed by former ANC members. The first is the Congress of the People, founded by Mosiuoa Lekota in 2008 in the aftermath of the Polokwane elective conference, when the ANC declined to re-elect Thabo Mbeki as its president and instead compelled his resignation from the national presidency. The second breakaway is the Economic Freedom Fighters, founded in 2013 after youth leader Julius Malema was expelled from the ANC. Before these, the most important split in the ANC's history occurred in 1959, when Robert Sobukwe led a splinter faction of African nationalists to the new Pan Africanist Congress. Current structure and composition. Leadership. Under the ANC constitution, every member of the ANC belongs to a local branch, and branch members select the organisation's policies and leaders. They do so primarily by electing delegates to the National Conference, which is currently convened every five years. Between conferences, the organisation is led by its 86-member National Executive Committee, which is elected at each conference. The most senior members of the National Executive Committee are the so-called Top Six officials, the ANC president primary among them. A symmetrical process occurs at the subnational levels: each of the nine provincial executive committees and regional executive committees are elected at provincial and regional elective conferences respectively, also attended by branch delegates; and branch officials are elected at branch general meetings. Leagues. The ANC has three leagues: the Women's League, the Youth League and the Veterans' League. Under the ANC constitution, the leagues are autonomous bodies with the scope to devise their own constitutions and policies; for the purpose of national conferences, they are treated somewhat like provinces, with voting delegates and the power to nominate leadership candidates. Tripartite Alliance. The ANC is recognised as the leader of a three-way alliance, known as the Tripartite Alliance, with the SACP and Congress of South African Trade Unions (COSATU). The alliance was formalised in mid-1990, after the ANC was unbanned, but has deeper historical roots: the SACP had worked closely with the ANC in exile, and COSATU had aligned itself with the Freedom Charter and Congress Alliance in 1987. The membership and leadership of the three organisations has traditionally overlapped significantly. The alliance constitutes a "de facto" electoral coalition: the SACP and COSATU do not contest in government elections, but field candidates through the ANC, hold senior positions in the ANC, and influence party policy. 
However, the SACP, in particular, has frequently threatened to field its own candidates, and in 2017 it did so for the first time, running against the ANC in by-elections in the Metsimaholo municipality, Free State. Electoral candidates. Under South Africa's closed-list proportional representation electoral system, parties have immense power in selecting candidates for legislative bodies. The ANC's internal candidate selection process is overseen by so-called list committees and tends to involve a degree of broad democratic participation, especially at the local level, where ANC branches vote to nominate candidates for the local government elections. Between 2003 and 2008, the ANC also gained a significant number of members through the controversial floor crossing process, which occurred especially at the local level. The leaders of the executive in each sphere of government – the president, the provincial premiers, and the mayors – are indirectly elected after each election. In practice, the selection of ANC candidates for these positions is highly centralised, with the ANC caucus voting together to elect a pre-decided candidate. Although the ANC does not always announce whom its caucuses intend to elect, the National Assembly has thus far always elected the ANC president as the national president. Cadre deployment. The ANC has adhered to a formal policy of cadre deployment since 1985. In the post-apartheid era, the policy includes but is not exhausted by selection of candidates for elections and government positions: it also entails that the central organisation "deploys" ANC members to various other strategic positions in the party, state, and economy. Ideology and policies. The ANC prides itself on being a broad church, and, like many dominant parties, resembles a catch-all party, accommodating a range of ideological tendencies. As Mandela told the "Washington Post" in 1990:The ANC has never been a political party. It was formed as a parliament of the African people. Right from the start, up to now, the ANC is a coalition, if you want, of people of various political affiliations. Some will support free enterprise, others socialism. Some are conservatives, others are liberals. We are united solely by our determination to oppose racial oppression. That is the only thing that unites us. There is no question of ideology as far as the odyssey of the ANC is concerned, because any question approaching ideology would split the organization from top to bottom. Because we have no connection whatsoever except at this one, of our determination to dismantle apartheid. The post-apartheid ANC continues to identify itself foremost as a liberation movement, pursuing "the complete liberation of the country from all forms of discrimination and national oppression". It also continues to claim the Freedom Charter of 1955 as "the basic policy document of the ANC". However, as NEC member Jeremy Cronin noted in 2007, the various broad principles of the Freedom Charter have been given different interpretations, and emphasised to differing extents, by different groups within the organisation. Nonetheless, some basic commonalities are visible in the policy and ideological preferences of the organisation's mainstream. Non-racialism. The ANC is committed to the ideal of non-racialism and to opposing "any form of racial, tribalistic or ethnic exclusivism or chauvinism". National Democratic Revolution. 
The 1969 Morogoro Conference committed the ANC to a "national democratic revolution [which] – destroying the existing social and economic relationship – will bring with it a correction of the historical injustices perpetrated against the indigenous majority and thus lay the basis for a new – and deeper internationalist – approach". For the movement's intellectuals, the concept of the National Democratic Revolution (NDR) was a means of reconciling the anti-apartheid and anti-colonial project with a second goal, that of establishing domestic and international socialism – the ANC is a member of the Socialist International, and its close partner the SACP traditionally conceives itself as a vanguard party. Specifically, and as implied by the 1969 document, NDR doctrine entails that the transformation of the domestic political system (national struggle, in Joe Slovo's phrase) is a precondition for a socialist revolution (class struggle). The concept remained important to ANC intellectuals and strategists after the end of apartheid. Indeed, the pursuit of the NDR is one of the primary objectives of the ANC as set out in its constitution. As with the Freedom Charter, the ambiguity of the NDR has allowed it to bear varying interpretations. For example, whereas SACP theorists tend to emphasise the anti-capitalist character of the NDR, some ANC policymakers have construed it as implying the empowerment of the black majority even within a market-capitalist scheme. Economic interventionism. Since 1994, consecutive ANC governments have held a strong preference for a significant degree of state intervention in the economy. The ANC's first comprehensive articulation of its post-apartheid economic policy framework was set out in the Reconstruction and Development Programme (RDP) document of 1994, which became its electoral manifesto and also, under the same name, the flagship policy of Nelson Mandela's government. The RDP aimed both to redress the socioeconomic inequalities created by colonialism and apartheid, and to promote economic growth and development; state intervention was judged a necessary step towards both goals. Specifically, the state was to intervene in the economy through three primary channels: a land reform programme; a degree of economic planning, through industrial and trade policy; and state investments in infrastructure and the provision of basic services, including health and education. Although the RDP was abandoned in 1996, these three channels of state economic intervention have remained mainstays of subsequent ANC policy frameworks. Neoliberal turn. In 1996, Mandela's government replaced the RDP with the Growth Employment and Redistribution (GEAR) programme, which was maintained under President Thabo Mbeki, Mandela's successor. GEAR has been characterised as a neoliberal policy, and it was disowned by both COSATU and the SACP. While some analysts viewed Mbeki's economic policy as undertaking the uncomfortable macroeconomic adjustments necessary for long-term growth, others – notably Patrick Bond – viewed it as a reflection of the ANC's failure to implement genuinely radical transformation after 1994. Debate about ANC commitment to redistribution on a socialist scale has continued: in 2013, the country's largest trade union, the National Union of Metalworkers of South Africa, withdrew its support for the ANC on the basis that "the working class cannot any longer see the ANC or the SACP as its class allies in any meaningful sense". 
It is evident, however, that the ANC never embraced free-market capitalism, and continued to favour a mixed economy: even as the debate over GEAR raged, the ANC declared itself (in 2004) a social-democratic party, and it was at that time presiding over phenomenal expansions of its black economic empowerment programme and the system of social grants. Developmental state. As its name suggests, the RDP emphasised state-led development – that is, a developmental state – which the ANC has typically been cautious, at least in its rhetoric, to distinguish from the neighbouring concept of a welfare state. In the mid-2000s, during Mbeki's second term, the notion of a developmental state was revived in South African political discourse when the national economy worsened; and the 2007 National Conference whole-heartedly endorsed developmentalism in its policy resolutions, calling for a state "at the centre of a mixed economy... which leads and guides that economy and which intervenes in the interest of the people as a whole". The proposed developmental state was also central to the ANC's campaign in the 2009 elections, and it remains a central pillar of the policy of the current government, which seeks to build a "capable and developmental" state. In this regard, ANC politicians often cite China as an aspirational example. A discussion document ahead of the ANC's 2015 National General Council proposed that: China['s] economic development trajectory remains a leading example of the triumph of humanity over adversity. The exemplary role of the collective leadership of the Communist Party of China in this regard should be a guiding lodestar of our own struggle. Radical economic transformation. Towards the end of Jacob Zuma's presidency, an ANC faction aligned to Zuma pioneered a new policy platform referred to as radical economic transformation (RET). Zuma announced the new focus on RET during his February 2017 State of the Nation address, and later that year, explaining that it had been adopted as ANC policy and therefore as government policy, defined it as entailing "fundamental change in the structures, systems, institutions and patterns of ownership and control of the economy, in favour of all South Africans, especially the poor". Arguments for RET were closely associated with the rhetorical concept of white monopoly capital. At the 54th National Conference in 2017, the ANC endorsed a number of policy principles advocated by RET supporters, including their proposal to pursue land expropriation without compensation as a matter of national policy. Foreign policy and relations. The ANC has long had close ties with China and the Chinese Communist Party (CCP), with the CCP having supported the ANC's struggle against apartheid since 1961. In 2008, the two parties signed a memorandum of understanding to train ANC members in China. President Cyril Ramaphosa and the ANC have not condemned the Russian invasion of Ukraine, and have faced criticism from opposition parties, public commentators, academics, civil society organisations, and former ANC members due to this. The ANC youth wing has meanwhile condemned sanctions against Russia and denounced NATO's eastward expansion as "fascistic". Officials representing the ANC Youth League acted as international observers for Russia's staged referendum to annex Ukrainian territory conquered during the war. Symbols and media. Flag and logo.
The logo of the ANC incorporates a spear and shield – symbolising the historical and ongoing struggle, armed and otherwise, against colonialism and racial oppression – and a wheel, which is borrowed from the 1955 Congress of the People campaign and therefore symbolises a united and non-racial movement for freedom and equality. The logo uses the same colours as the ANC flag, which comprises three horizontal stripes of equal width in black, green and gold. The black symbolises the native people of South Africa; the green represents the land of South Africa; and the gold represents the country's mineral and other natural wealth. The black, green and gold tricolour also appeared on the flag of the KwaZulu bantustan and appears on the flag of the ANC's rival, the IFP; and all three colours appear in the post-apartheid South African national flag. Publications. Since 1996, the ANC Department of Political Education has published the quarterly "Umrabulo" political discussion journal; and "ANC Today", a weekly online newsletter, was launched in 2001 to offset the alleged bias of the press. In addition, since 1972, it has been traditional for the ANC president to publish annually a so-called January 8 Statement: a reflective letter sent to members on 8 January, the anniversary of the organisation's founding. In earlier years, the ANC published a range of periodicals, the most important of which was the monthly journal "Sechaba" (1967–1990), printed in the German Democratic Republic and banned by the apartheid government. The ANC's Radio Freedom also gained a wide audience during apartheid. Amandla. "Amandla ngawethu", or the Sotho variant "Matla ke arona", is a common rallying call at ANC meetings, roughly meaning "power to the people". It is also common for meetings to sing so-called struggle songs, which were sung during anti-apartheid meetings and in MK camps. In the case of at least two of these songs – "Dubula ibhunu" and "Umshini wami" – this has caused controversy in recent years. Criticism and controversy. Corruption controversies. The most prominent corruption case involving the ANC relates to a series of bribes paid to companies involved in the ongoing R55 billion Arms Deal saga, which resulted in a long-term jail sentence for Schabir Shaik, legal adviser to then Deputy President Jacob Zuma. Zuma, the former South African President, was charged with fraud, bribery and corruption in the Arms Deal, but the charges were subsequently withdrawn by the National Prosecuting Authority of South Africa due to the delay in prosecution. The ANC has also been criticised for its subsequent abolition of the Scorpions, the multidisciplinary agency that investigated and prosecuted organised crime and corruption, and was heavily involved in the investigation into Zuma and Shaik. Tony Yengeni, in his position as chief whip of the ANC and head of Parliament's defence committee, has been named as being involved in bribing the German company ThyssenKrupp over the purchase of four corvettes for the SANDF. Other recent corruption issues include the sexual misconduct and criminal charges of Beaufort West municipal manager Truman Prince, and the Oilgate scandal, in which millions of rand in funds from a state-owned company were funnelled into ANC coffers. The ANC has also been accused of using government and civil society to fight its political battles against opposition parties such as the Democratic Alliance. 
The result has been a number of complaints and allegations that none of the political parties truly represent the interests of the poor. This has resulted in the "No Land! No House! No Vote!" campaign, which became very prominent during elections. In 2018, the "New York Times" reported on the killings of ANC corruption whistleblowers. During an address on 28 October 2021, former president Thabo Mbeki commented on the history of corruption within the ANC. He reflected that Mandela had already warned in 1997 that the ANC was attracting individuals who viewed the party as "a route to power and self-enrichment." He added that the ANC leadership "did not know how to deal with this problem." During a lecture on 10 December, Mbeki reiterated concerns about "careerists" within the party, and stressed the need to "purge itself of such members". Condemnation over Secrecy Bill. In late 2011 the ANC was heavily criticised over the passage of the Protection of State Information Bill, which opponents claimed would improperly restrict the freedom of the press. Opposition to the bill included otherwise ANC-aligned groups such as COSATU. Notably, Nelson Mandela and other Nobel laureates Nadine Gordimer, Archbishop Desmond Tutu, and F. W. de Klerk have expressed disappointment with the bill for not meeting standards of constitutionality and aspirations for freedom of information and expression. Role in the Marikana killings. The ANC has been criticised for its role in failing to prevent the 16 August 2012 massacre of Lonmin miners at Marikana in the North West. Some allege that Police Commissioner Riah Phiyega and Police Minister Nathi Mthethwa may have given the go-ahead for the police action against the miners on that day. Commissioner Phiyega of the ANC came under further criticism as being insensitive and uncaring when she was caught smiling and laughing during the Farlam Commission's video playback of the 'massacre'. Archbishop Desmond Tutu has announced that he can no longer bring himself to vote for the ANC, saying that it is no longer the party that he and Nelson Mandela fought for, that it has lost its way, and that it is in danger of becoming a corrupt entity in power. Financial mismanagement. Since at least 2017, the ANC has encountered significant problems related to financial mismanagement. According to a report filed by the former treasurer-general Zweli Mkhize in December 2017, the ANC was technically insolvent as its liabilities exceeded its assets. These problems continued into the second half of 2021. By September 2021, the ANC had reportedly amassed a debt exceeding R200-million, including over R100-million owed to the South African Revenue Service. Beginning in May 2021, the ANC failed to pay monthly staff salaries on time. Having gone without pay for three consecutive months, workers planned a strike in late August 2021. In response, the ANC initiated a crowdfunding campaign to raise money for staff salaries. By November 2021, its Cape Town staff were approaching their fourth month without salaries, while medical aid and provident fund contributions had been suspended in various provinces. The party has countered that the Political Party Funding Act, which prohibits anonymous contributions, has dissuaded some donors who previously injected money for salaries. State capture. In January 2018, then-President Jacob Zuma established the Zondo Commission to investigate allegations of state capture, corruption, and fraud in the public sector. 
Over the following four years, the Commission heard testimony from over 250 witnesses and collected more than 150,000 pages of evidence. After several extensions, the first part of the final three-part report was published on 4 January 2022. The report found that the ANC, including Zuma and his political allies, had benefited from the extensive corruption of state enterprises, including the South African Revenue Service. It also found that the ANC "simply did not care that state entities were in decline during state capture or they slept on the job – or they simply didn't know what to do."
2504
Amphetamine
Amphetamine (contracted from alpha-methylphenethylamine) is a central nervous system (CNS) stimulant that is used in the treatment of attention deficit hyperactivity disorder (ADHD), narcolepsy, and obesity. Amphetamine was discovered in 1887 and exists as two enantiomers: levoamphetamine and dextroamphetamine. "Amphetamine" properly refers to a specific chemical, the racemic free base, which is equal parts of the two enantiomers in their pure amine forms. The term is frequently used informally to refer to any combination of the enantiomers, or to either of them alone. Historically, it has been used to treat nasal congestion and depression. Amphetamine is also used as an athletic performance enhancer and cognitive enhancer, and recreationally as an aphrodisiac and euphoriant. It is a prescription drug in many countries, and unauthorized possession and distribution of amphetamine are often tightly controlled due to the significant health risks associated with recreational use. The first amphetamine pharmaceutical was Benzedrine, a brand which was used to treat a variety of conditions. Currently, pharmaceutical amphetamine is prescribed as racemic amphetamine, Adderall, dextroamphetamine, or the inactive prodrug lisdexamfetamine. Amphetamine increases monoamine and excitatory neurotransmission in the brain, with its most pronounced effects targeting the norepinephrine and dopamine neurotransmitter systems. At therapeutic doses, amphetamine causes emotional and cognitive effects such as euphoria, change in desire for sex, increased wakefulness, and improved cognitive control. It induces physical effects such as improved reaction time, fatigue resistance, and increased muscle strength. Larger doses of amphetamine may impair cognitive function and induce rapid muscle breakdown. Addiction is a serious risk with heavy recreational amphetamine use, but is unlikely to occur from long-term medical use at therapeutic doses. Very high doses can result in psychosis (e.g., delusions and paranoia), which rarely occurs at therapeutic doses even during long-term use. Recreational doses are generally much larger than prescribed therapeutic doses and carry a far greater risk of serious side effects. Amphetamine belongs to the phenethylamine class. It is also the parent compound of its own structural class, the substituted amphetamines, which includes prominent substances such as bupropion, cathinone, MDMA, and methamphetamine. As a member of the phenethylamine class, amphetamine is also chemically related to the naturally occurring trace amine neuromodulators, specifically phenethylamine and N-methylphenethylamine, both of which are produced within the human body. Phenethylamine is the parent compound of amphetamine, while N-methylphenethylamine is a positional isomer of amphetamine that differs only in the placement of the methyl group. Uses. Medical. Long-term amphetamine exposure at sufficiently high doses in some animal species is known to produce abnormal dopamine system development or nerve damage, but, in humans with ADHD, long-term use of pharmaceutical amphetamines at therapeutic doses appears to improve brain development and nerve growth. Reviews of magnetic resonance imaging (MRI) studies suggest that long-term treatment with amphetamine decreases abnormalities in brain structure and function found in subjects with ADHD, and improves function in several parts of the brain, such as the right caudate nucleus of the basal ganglia. 
Reviews of clinical stimulant research have established the safety and effectiveness of long-term continuous amphetamine use for the treatment of ADHD. Randomized controlled trials of continuous stimulant therapy for the treatment of ADHD spanning 2 years have demonstrated treatment effectiveness and safety. Two reviews have indicated that long-term continuous stimulant therapy for ADHD is effective for reducing the core symptoms of ADHD (i.e., hyperactivity, inattention, and impulsivity), enhancing quality of life and academic achievement, and producing improvements in a large number of functional outcomes across 9 categories of outcomes related to academics, antisocial behavior, driving, non-medicinal drug use, obesity, occupation, self-esteem, service use (i.e., academic, occupational, health, financial, and legal services), and social function. One review highlighted a nine-month randomized controlled trial of amphetamine treatment for ADHD in children that found an average increase of 4.5 IQ points, continued increases in attention, and continued decreases in disruptive behaviors and hyperactivity. Another review indicated that, based upon the longest follow-up studies conducted to date, lifetime stimulant therapy that begins during childhood is continuously effective for controlling ADHD symptoms and reduces the risk of developing a substance use disorder as an adult. Current models of ADHD suggest that it is associated with functional impairments in some of the brain's neurotransmitter systems; these functional impairments involve impaired dopamine neurotransmission in the mesocorticolimbic projection and norepinephrine neurotransmission in the noradrenergic projections from the locus coeruleus to the prefrontal cortex. Psychostimulants like methylphenidate and amphetamine are effective in treating ADHD because they increase neurotransmitter activity in these systems. Approximately 80% of those who use these stimulants see improvements in ADHD symptoms. Children with ADHD who use stimulant medications generally have better relationships with peers and family members, perform better in school, are less distractible and impulsive, and have longer attention spans. The Cochrane reviews on the treatment of ADHD in children, adolescents, and adults with pharmaceutical amphetamines stated that short-term studies have demonstrated that these drugs decrease the severity of symptoms, but they have higher discontinuation rates than non-stimulant medications due to their adverse side effects. A Cochrane review on the treatment of ADHD in children with tic disorders such as Tourette syndrome indicated that stimulants in general do not make tics worse, but high doses of dextroamphetamine could exacerbate tics in some individuals. Enhancing performance. Cognitive performance. In 2015, a systematic review and a meta-analysis of high quality clinical trials found that, when used at low (therapeutic) doses, amphetamine produces modest yet unambiguous improvements in cognition, including working memory, long-term episodic memory, inhibitory control, and some aspects of attention, in normal healthy adults; these cognition-enhancing effects of amphetamine are known to be partially mediated through the indirect activation of both dopamine receptor D1 and adrenoceptor α2 in the prefrontal cortex. A systematic review from 2014 found that low doses of amphetamine also improve memory consolidation, in turn leading to improved recall of information. 
Therapeutic doses of amphetamine also enhance cortical network efficiency, an effect which mediates improvements in working memory in all individuals. Amphetamine and other ADHD stimulants also improve task saliency (motivation to perform a task) and increase arousal (wakefulness), in turn promoting goal-directed behavior. Stimulants such as amphetamine can improve performance on difficult and boring tasks and are used by some students as a study and test-taking aid. Based upon studies of self-reported illicit stimulant use, of college students use diverted ADHD stimulants, which are primarily used for enhancement of academic performance rather than as recreational drugs. However, high amphetamine doses that are above the therapeutic range can interfere with working memory and other aspects of cognitive control. Physical performance. Amphetamine is used by some athletes for its psychological and athletic performance-enhancing effects, such as increased endurance and alertness; however, non-medical amphetamine use is prohibited at sporting events that are regulated by collegiate, national, and international anti-doping agencies. In healthy people at oral therapeutic doses, amphetamine has been shown to increase muscle strength, acceleration, athletic performance in anaerobic conditions, and endurance (i.e., it delays the onset of fatigue), while improving reaction time. Amphetamine improves endurance and reaction time primarily through reuptake inhibition and release of dopamine in the central nervous system. Amphetamine and other dopaminergic drugs also increase power output at fixed levels of perceived exertion by overriding a "safety switch", allowing the core temperature limit to increase in order to access a reserve capacity that is normally off-limits. At therapeutic doses, the adverse effects of amphetamine do not impede athletic performance; however, at much higher doses, amphetamine can induce effects that severely impair performance, such as rapid muscle breakdown and elevated body temperature. Recreational. Amphetamine, specifically the more dopaminergic dextrorotatory enantiomer (dextroamphetamine), is also used recreationally as a euphoriant and aphrodisiac, and, like other amphetamines, is used as a club drug for its energetic and euphoric high. Dextroamphetamine (d-amphetamine) is considered to have a high potential for misuse in a recreational manner since individuals typically report feeling euphoric, more alert, and more energetic after taking the drug. A notable part of the 1960s mod subculture in the UK was recreational amphetamine use, which was used to fuel all-night dances at clubs like Manchester's Twisted Wheel. Newspaper reports described dancers emerging from clubs at 5 a.m. with dilated pupils. Mods used the drug for stimulation and alertness, which they viewed as different from the intoxication caused by alcohol and other drugs. Dr. Andrew Wilson argues that for a significant minority, "amphetamines symbolised the smart, on-the-ball, cool image" and that they sought "stimulation not intoxication [...] greater awareness, not escape" and "confidence and articulacy" rather than the "drunken rowdiness of previous generations." Dextroamphetamine's dopaminergic (rewarding) properties affect the mesocorticolimbic circuit, a group of neural structures responsible for incentive salience (i.e., "wanting"; desire or craving for a reward and motivation), positive reinforcement and positively-valenced emotions, particularly ones involving pleasure. 
Large recreational doses of dextroamphetamine may produce symptoms of dextroamphetamine overdose. Recreational users sometimes open dexedrine capsules and crush the contents in order to insufflate (snort) it or subsequently dissolve it in water and inject it. Immediate-release formulations have higher potential for abuse via insufflation (snorting) or intravenous injection due to a more favorable pharmacokinetic profile and easy crushability (especially tablets). Injection into the bloodstream can be dangerous because insoluble fillers within the tablets can block small blood vessels. Chronic overuse of dextroamphetamine can lead to severe drug dependence, resulting in withdrawal symptoms when drug use stops. Contraindications. According to the International Programme on Chemical Safety (IPCS) and the United States Food and Drug Administration (USFDA), amphetamine is contraindicated in people with a history of drug abuse, cardiovascular disease, severe agitation, or severe anxiety. It is also contraindicated in individuals with advanced arteriosclerosis (hardening of the arteries), glaucoma (increased eye pressure), hyperthyroidism (excessive production of thyroid hormone), or moderate to severe hypertension. These agencies indicate that people who have experienced allergic reactions to other stimulants or who are taking monoamine oxidase inhibitors (MAOIs) should not take amphetamine, although safe concurrent use of amphetamine and monoamine oxidase inhibitors has been documented. These agencies also state that anyone with anorexia nervosa, bipolar disorder, depression, hypertension, liver or kidney problems, mania, psychosis, Raynaud's phenomenon, seizures, thyroid problems, tics, or Tourette syndrome should monitor their symptoms while taking amphetamine. Evidence from human studies indicates that therapeutic amphetamine use does not cause developmental abnormalities in the fetus or newborns (i.e., it is not a human teratogen), but amphetamine abuse does pose risks to the fetus. Amphetamine has also been shown to pass into breast milk, so the IPCS and the USFDA advise mothers to avoid breastfeeding when using it. Due to the potential for reversible growth impairments, the USFDA advises monitoring the height and weight of children and adolescents prescribed an amphetamine pharmaceutical. Adverse effects. Physical. Cardiovascular side effects can include hypertension or hypotension from a vasovagal response, Raynaud's phenomenon (reduced blood flow to the hands and feet), and tachycardia (increased heart rate). Sexual side effects in males may include erectile dysfunction, frequent erections, or prolonged erections. Gastrointestinal side effects may include abdominal pain, constipation, diarrhea, and nausea. Other potential physical side effects include appetite loss, blurred vision, dry mouth, excessive grinding of the teeth, nosebleed, profuse sweating, rhinitis medicamentosa (drug-induced nasal congestion), reduced seizure threshold, tics (a type of movement disorder), and weight loss. Dangerous physical side effects are rare at typical pharmaceutical doses. Amphetamine stimulates the medullary respiratory centers, producing faster and deeper breaths. In a normal person at therapeutic doses, this effect is usually not noticeable, but when respiration is already compromised, it may be evident. Amphetamine also induces contraction in the urinary bladder sphincter, the muscle which controls urination, which can result in difficulty urinating. 
This effect can be useful in treating bed wetting and loss of bladder control. The effects of amphetamine on the gastrointestinal tract are unpredictable. If intestinal activity is high, amphetamine may reduce gastrointestinal motility (the rate at which content moves through the digestive system); however, amphetamine may increase motility when the smooth muscle of the tract is relaxed. Amphetamine also has a slight analgesic effect and can enhance the pain relieving effects of opioids. USFDA-commissioned studies from 2011 indicate that in children, young adults, and adults there is no association between serious adverse cardiovascular events (sudden death, heart attack, and stroke) and the medical use of amphetamine or other ADHD stimulants. However, amphetamine pharmaceuticals are contraindicated in individuals with cardiovascular disease. Psychological. At normal therapeutic doses, the most common psychological side effects of amphetamine include increased alertness, apprehension, concentration, initiative, self-confidence and sociability, mood swings (elated mood followed by mildly depressed mood), insomnia or wakefulness, and decreased sense of fatigue. Less common side effects include anxiety, change in libido, grandiosity, irritability, repetitive or obsessive behaviors, and restlessness; these effects depend on the user's personality and current mental state. Amphetamine psychosis (e.g., delusions and paranoia) can occur in heavy users. Although very rare, this psychosis can also occur at therapeutic doses during long-term therapy. According to the USFDA, "there is no systematic evidence" that stimulants produce aggressive behavior or hostility. Amphetamine has also been shown to produce a conditioned place preference in humans taking therapeutic doses, meaning that individuals acquire a preference for spending time in places where they have previously used amphetamine. Reinforcement disorders. Addiction. Addiction is a serious risk with heavy recreational amphetamine use, but is unlikely to occur from long-term medical use at therapeutic doses; in fact, lifetime stimulant therapy for ADHD that begins during childhood reduces the risk of developing substance use disorders as an adult. Pathological overactivation of the mesolimbic pathway, a dopamine pathway that connects the ventral tegmental area to the nucleus accumbens, plays a central role in amphetamine addiction. Individuals who frequently self-administer high doses of amphetamine have a high risk of developing an amphetamine addiction, since chronic use at high doses gradually increases the level of accumbal ΔFosB, a "molecular switch" and "master control protein" for addiction. Once nucleus accumbens ΔFosB is sufficiently overexpressed, it begins to increase the severity of addictive behavior (i.e., compulsive drug-seeking) with further increases in its expression. While there are currently no effective drugs for treating amphetamine addiction, regularly engaging in sustained aerobic exercise appears to reduce the risk of developing such an addiction. Exercise therapy improves clinical treatment outcomes and may be used as an adjunct therapy with behavioral therapies for addiction. Biomolecular mechanisms. Chronic use of amphetamine at excessive doses causes alterations in gene expression in the mesocorticolimbic projection, which arise through transcriptional and epigenetic mechanisms. 
The most important transcription factors that produce these alterations are "Delta FBJ murine osteosarcoma viral oncogene homolog B" (ΔFosB), "cAMP response element binding protein" (CREB), and "nuclear factor-kappa B" (NF-κB). ΔFosB is the most significant biomolecular mechanism in addiction because ΔFosB overexpression (i.e., an abnormally high level of gene expression which produces a pronounced gene-related phenotype) in the D1-type medium spiny neurons in the nucleus accumbens is necessary and sufficient for many of the neural adaptations and regulates multiple behavioral effects (e.g., reward sensitization and escalating drug self-administration) involved in addiction. Once ΔFosB is sufficiently overexpressed, it induces an addictive state that becomes increasingly severe with further increases in ΔFosB expression. It has been implicated in addictions to alcohol, cannabinoids, cocaine, methylphenidate, nicotine, opioids, phencyclidine, propofol, and substituted amphetamines, among others. ΔJunD, a transcription factor, and G9a, a histone methyltransferase enzyme, both oppose the function of ΔFosB and inhibit increases in its expression. Sufficiently overexpressing ΔJunD in the nucleus accumbens with viral vectors can completely block many of the neural and behavioral alterations seen in chronic drug abuse (i.e., the alterations mediated by ΔFosB). Similarly, accumbal G9a hyperexpression results in markedly increased histone 3 lysine residue 9 dimethylation (H3K9me2) and blocks the induction of ΔFosB-mediated neural and behavioral plasticity by chronic drug use, which occurs via H3K9me2-mediated repression of transcription factors for ΔFosB and H3K9me2-mediated repression of various ΔFosB transcriptional targets (e.g., CDK5). ΔFosB also plays an important role in regulating behavioral responses to natural rewards, such as palatable food, sex, and exercise. Since both natural rewards and addictive drugs induce the expression of ΔFosB (i.e., they cause the brain to produce more of it), chronic acquisition of these rewards can result in a similar pathological state of addiction. Consequently, ΔFosB is the most significant factor involved in both amphetamine addiction and amphetamine-induced sexual addictions, which are compulsive sexual behaviors that result from excessive sexual activity and amphetamine use. These sexual addictions are associated with a dopamine dysregulation syndrome which occurs in some patients taking dopaminergic drugs. The effects of amphetamine on gene regulation are both dose- and route-dependent. Most of the research on gene regulation and addiction is based upon animal studies with intravenous amphetamine administration at very high doses. The few studies that have used equivalent (weight-adjusted) human therapeutic doses and oral administration show that these changes, if they occur, are relatively minor. This suggests that medical use of amphetamine does not significantly affect gene regulation. Pharmacological treatments. There is no effective pharmacotherapy for amphetamine addiction. Reviews from 2015 and 2016 indicated that TAAR1-selective agonists have significant therapeutic potential as a treatment for psychostimulant addictions; however, the only compounds which are known to function as TAAR1-selective agonists are experimental drugs. 
Amphetamine addiction is largely mediated through increased activation of dopamine receptors and NMDA receptors in the nucleus accumbens; magnesium ions inhibit NMDA receptors by blocking the receptor calcium channel. One review suggested that, based upon animal testing, pathological (addiction-inducing) psychostimulant use significantly reduces the level of intracellular magnesium throughout the brain. Supplemental magnesium treatment has been shown to reduce amphetamine self-administration (i.e., doses given to oneself) in humans, but it is not an effective monotherapy for amphetamine addiction. A systematic review and meta-analysis from 2019 assessed the efficacy of 17 different pharmacotherapies used in randomized controlled trials (RCTs) for amphetamine and methamphetamine addiction; it found only low-strength evidence that methylphenidate might reduce amphetamine or methamphetamine self-administration. There was low- to moderate-strength evidence of no benefit for most of the other medications used in RCTs, which included antidepressants (bupropion, mirtazapine, sertraline), antipsychotics (aripiprazole), anticonvulsants (topiramate, baclofen, gabapentin), naltrexone, varenicline, citicoline, ondansetron, prometa, riluzole, atomoxetine, dextroamphetamine, and modafinil. Behavioral treatments. A 2018 systematic review and network meta-analysis of 50 trials involving 12 different psychosocial interventions for amphetamine, methamphetamine, or cocaine addiction found that combination therapy with both contingency management and community reinforcement approach had the highest efficacy (i.e., abstinence rate) and acceptability (i.e., lowest dropout rate). Other treatment modalities examined in the analysis included monotherapy with contingency management or community reinforcement approach, cognitive behavioral therapy, 12-step programs, non-contingent reward-based therapies, psychodynamic therapy, and other combination therapies involving these. Additionally, research on the neurobiological effects of physical exercise suggests that daily aerobic exercise, especially endurance exercise (e.g., marathon running), prevents the development of drug addiction and is an effective adjunct therapy (i.e., a supplemental treatment) for amphetamine addiction. Exercise leads to better treatment outcomes when used as an adjunct treatment, particularly for psychostimulant addictions. In particular, aerobic exercise decreases psychostimulant self-administration, reduces the reinstatement (i.e., relapse) of drug-seeking, and induces increased dopamine receptor D2 (DRD2) density in the striatum. This is the opposite of pathological stimulant use, which induces decreased striatal DRD2 density. One review noted that exercise may also prevent the development of a drug addiction by altering ΔFosB or immunoreactivity in the striatum or other parts of the reward system. Dependence and withdrawal. Drug tolerance develops rapidly in amphetamine abuse (i.e., recreational amphetamine use), so periods of extended abuse require increasingly larger doses of the drug in order to achieve the same effect. According to a Cochrane review on withdrawal in individuals who compulsively use amphetamine and methamphetamine, "when chronic heavy users abruptly discontinue amphetamine use, many report a time-limited withdrawal syndrome that occurs within 24 hours of their last dose." 
This review noted that withdrawal symptoms in chronic, high-dose users are frequent, occurring in roughly 88% of cases, and persist for  weeks with a marked "crash" phase occurring during the first week. Amphetamine withdrawal symptoms can include anxiety, drug craving, depressed mood, fatigue, increased appetite, increased movement or decreased movement, lack of motivation, sleeplessness or sleepiness, and lucid dreams. The review indicated that the severity of withdrawal symptoms is positively correlated with the age of the individual and the extent of their dependence. Mild withdrawal symptoms from the discontinuation of amphetamine treatment at therapeutic doses can be avoided by tapering the dose. Overdose. An amphetamine overdose can lead to many different symptoms, but is rarely fatal with appropriate care. The severity of overdose symptoms increases with dosage and decreases with drug tolerance to amphetamine. Tolerant individuals have been known to take as much as 5 grams of amphetamine in a day, which is roughly 100 times the maximum daily therapeutic dose. Symptoms of a moderate and extremely large overdose are listed below; fatal amphetamine poisoning usually also involves convulsions and coma. In 2013, overdose on amphetamine, methamphetamine, and other compounds implicated in an "amphetamine use disorder" resulted in an estimated 3,788 deaths worldwide ( deaths, 95% confidence). Toxicity. In rodents and primates, sufficiently high doses of amphetamine cause dopaminergic neurotoxicity, or damage to dopamine neurons, which is characterized by dopamine terminal degeneration and reduced transporter and receptor function. There is no evidence that amphetamine is directly neurotoxic in humans. However, large doses of amphetamine may indirectly cause dopaminergic neurotoxicity as a result of hyperpyrexia, the excessive formation of reactive oxygen species, and increased autoxidation of dopamine. Animal models of neurotoxicity from high-dose amphetamine exposure indicate that the occurrence of hyperpyrexia (i.e., core body temperature ≥ 40 °C) is necessary for the development of amphetamine-induced neurotoxicity. Prolonged elevations of brain temperature above 40 °C likely promote the development of amphetamine-induced neurotoxicity in laboratory animals by facilitating the production of reactive oxygen species, disrupting cellular protein function, and transiently increasing blood–brain barrier permeability. Psychosis. An amphetamine overdose can result in a stimulant psychosis that may involve a variety of symptoms, such as delusions and paranoia. A Cochrane review on treatment for amphetamine, dextroamphetamine, and methamphetamine psychosis states that about of users fail to recover completely. According to the same review, there is at least one trial that shows antipsychotic medications effectively resolve the symptoms of acute amphetamine psychosis. Psychosis rarely arises from therapeutic use. Drug interactions. Many types of substances are known to interact with amphetamine, resulting in altered drug action or metabolism of amphetamine, the interacting substance, or both. Inhibitors of enzymes that metabolize amphetamine (e.g., CYP2D6 and FMO3) will prolong its elimination half-life, meaning that its effects will last longer. Amphetamine also interacts with , particularly monoamine oxidase A inhibitors, since both MAOIs and amphetamine increase plasma catecholamines (i.e., norepinephrine and dopamine); therefore, concurrent use of both is dangerous. 
Amphetamine modulates the activity of most psychoactive drugs. In particular, amphetamine may decrease the effects of sedatives and depressants and increase the effects of stimulants and antidepressants. Amphetamine may also decrease the effects of antihypertensives and antipsychotics due to its effects on blood pressure and dopamine respectively. Zinc supplementation may reduce the minimum effective dose of amphetamine when it is used for the treatment of ADHD. In general, there is no significant interaction when consuming amphetamine with food, but the pH of gastrointestinal content and urine affects the absorption and excretion of amphetamine, respectively. Acidic substances reduce the absorption of amphetamine and increase urinary excretion, and alkaline substances do the opposite. Due to the effect pH has on absorption, amphetamine also interacts with gastric acid reducers such as proton pump inhibitors and H2 antihistamines, which increase gastrointestinal pH (i.e., make it less acidic). Pharmacology. Pharmacodynamics. Amphetamine exerts its behavioral effects by altering the use of monoamines as neuronal signals in the brain, primarily in catecholamine neurons in the reward and executive function pathways of the brain. The concentrations of the main neurotransmitters involved in reward circuitry and executive functioning, dopamine and norepinephrine, increase dramatically in a dose-dependent manner by amphetamine because of its effects on monoamine transporters. The reinforcing and motivational salience-promoting effects of amphetamine are due mostly to enhanced dopaminergic activity in the mesolimbic pathway. The euphoric and locomotor-stimulating effects of amphetamine are dependent upon the magnitude and speed by which it increases synaptic dopamine and norepinephrine concentrations in the striatum. Amphetamine has been identified as a potent full agonist of trace amine-associated receptor 1 (TAAR1), a and G protein-coupled receptor (GPCR) discovered in 2001, which is important for regulation of brain monoamines. Activation of increases production via adenylyl cyclase activation and inhibits monoamine transporter function. Monoamine autoreceptors (e.g., D2 short, presynaptic α2, and presynaptic 5-HT1A) have the opposite effect of TAAR1, and together these receptors provide a regulatory system for monoamines. Notably, amphetamine and trace amines possess high binding affinities for TAAR1, but not for monoamine autoreceptors. Imaging studies indicate that monoamine reuptake inhibition by amphetamine and trace amines is site specific and depends upon the presence of TAAR1 in the associated monoamine neurons. In addition to the neuronal monoamine transporters, amphetamine also inhibits both vesicular monoamine transporters, VMAT1 and VMAT2, as well as SLC1A1, SLC22A3, and SLC22A5. SLC1A1 is excitatory amino acid transporter 3 (EAAT3), a glutamate transporter located in neurons, SLC22A3 is an extraneuronal monoamine transporter that is present in astrocytes, and SLC22A5 is a high-affinity carnitine transporter. Amphetamine is known to strongly induce cocaine- and amphetamine-regulated transcript (CART) gene expression, a neuropeptide involved in feeding behavior, stress, and reward, which induces observable increases in neuronal development and survival "in vitro". The CART receptor has yet to be identified, but there is significant evidence that CART binds to a unique . 
Amphetamine also inhibits monoamine oxidases at very high doses, resulting in less monoamine and trace amine metabolism and consequently higher concentrations of synaptic monoamines. In humans, the only post-synaptic receptor at which amphetamine is known to bind is the receptor, where it acts as an agonist with low micromolar affinity. The full profile of amphetamine's short-term drug effects in humans is mostly derived through increased cellular communication or neurotransmission of dopamine, serotonin, norepinephrine, epinephrine, histamine, CART peptides, endogenous opioids, adrenocorticotropic hormone, corticosteroids, and glutamate, which it affects through interactions with , , , , , , and possibly other biological targets. Amphetamine also activates seven human carbonic anhydrase enzymes, several of which are expressed in the human brain. Dextroamphetamine is a more potent agonist of than levoamphetamine. Consequently, dextroamphetamine produces greater stimulation than levoamphetamine, roughly three to four times more, but levoamphetamine has slightly stronger cardiovascular and peripheral effects. Dopamine. In certain brain regions, amphetamine increases the concentration of dopamine in the synaptic cleft. Amphetamine can enter the presynaptic neuron either through or by diffusing across the neuronal membrane directly. As a consequence of DAT uptake, amphetamine produces competitive reuptake inhibition at the transporter. Upon entering the presynaptic neuron, amphetamine activates which, through protein kinase A (PKA) and protein kinase C (PKC) signaling, causes DAT phosphorylation. Phosphorylation by either protein kinase can result in DAT internalization ( reuptake inhibition), but phosphorylation alone induces the reversal of dopamine transport through DAT (i.e., dopamine efflux). Amphetamine is also known to increase intracellular calcium, an effect which is associated with DAT phosphorylation through an unidentified Ca2+/calmodulin-dependent protein kinase (CAMK)-dependent pathway, in turn producing dopamine efflux. Through direct activation of G protein-coupled inwardly-rectifying potassium channels, reduces the firing rate of dopamine neurons, preventing a hyper-dopaminergic state. Amphetamine is also a substrate for the presynaptic vesicular monoamine transporter, . Following amphetamine uptake at VMAT2, amphetamine induces the collapse of the vesicular pH gradient, which results in the release of dopamine molecules from synaptic vesicles into the cytosol via dopamine efflux through VMAT2. Subsequently, the cytosolic dopamine molecules are released from the presynaptic neuron into the synaptic cleft via reverse transport at . Norepinephrine. Similar to dopamine, amphetamine dose-dependently increases the level of synaptic norepinephrine, the direct precursor of epinephrine. Based upon neuronal expression, amphetamine is thought to affect norepinephrine analogously to dopamine. In other words, amphetamine induces TAAR1-mediated efflux and reuptake inhibition at phosphorylated , competitive NET reuptake inhibition, and norepinephrine release from . Serotonin. Amphetamine exerts analogous, yet less pronounced, effects on serotonin as on dopamine and norepinephrine. Amphetamine affects serotonin via and, like norepinephrine, is thought to phosphorylate via . Like dopamine, amphetamine has low, micromolar affinity at the human 5-HT1A receptor. Other neurotransmitters, peptides, hormones, and enzymes. 
Acute amphetamine administration in humans increases endogenous opioid release in several brain structures in the reward system. Extracellular levels of glutamate, the primary excitatory neurotransmitter in the brain, have been shown to increase in the striatum following exposure to amphetamine. This increase in extracellular glutamate presumably occurs via the amphetamine-induced internalization of EAAT3, a glutamate reuptake transporter, in dopamine neurons. Amphetamine also induces the selective release of histamine from mast cells and efflux from histaminergic neurons through . Acute amphetamine administration can also increase adrenocorticotropic hormone and corticosteroid levels in blood plasma by stimulating the hypothalamic–pituitary–adrenal axis. In December 2017, the first study assessing the interaction between amphetamine and human carbonic anhydrase enzymes was published; of the eleven carbonic anhydrase enzymes it examined, it found that amphetamine potently activates seven, four of which are highly expressed in the human brain, with low nanomolar through low micromolar activating effects. Based upon preclinical research, cerebral carbonic anhydrase activation has cognition-enhancing effects; but, based upon the clinical use of carbonic anhydrase inhibitors, carbonic anhydrase activation in other tissues may be associated with adverse effects, such as ocular activation exacerbating glaucoma. Pharmacokinetics. The oral bioavailability of amphetamine varies with gastrointestinal pH; it is well absorbed from the gut, and bioavailability is typically over 75% for dextroamphetamine. Amphetamine is a weak base with a p"K"a of 9.9; consequently, when the pH is basic, more of the drug is in its lipid soluble free base form, and more is absorbed through the lipid-rich cell membranes of the gut epithelium. Conversely, an acidic pH means the drug is predominantly in a water-soluble cationic (salt) form, and less is absorbed. Approximately of amphetamine circulating in the bloodstream is bound to plasma proteins. Following absorption, amphetamine readily distributes into most tissues in the body, with high concentrations occurring in cerebrospinal fluid and brain tissue. The half-lives of amphetamine enantiomers differ and vary with urine pH. At normal urine pH, the half-lives of dextroamphetamine and levoamphetamine are  hours and  hours, respectively. Highly acidic urine will reduce the enantiomer half-lives to 7 hours; highly alkaline urine will increase the half-lives up to 34 hours. The immediate-release and extended release variants of salts of both isomers reach peak plasma concentrations at 3 hours and 7 hours post-dose respectively. Amphetamine is eliminated via the kidneys, with of the drug being excreted unchanged at normal urinary pH. When the urinary pH is basic, amphetamine is in its free base form, so less is excreted. When urine pH is abnormal, the urinary recovery of amphetamine may range from a low of 1% to a high of 75%, depending mostly upon whether urine is too basic or acidic, respectively. Following oral administration, amphetamine appears in urine within 3 hours. Roughly 90% of ingested amphetamine is eliminated 3 days after the last oral dose. CYP2D6, dopamine β-hydroxylase (DBH), flavin-containing monooxygenase 3 (FMO3), butyrate-CoA ligase (XM-ligase), and glycine "N"-acyltransferase (GLYAT) are the enzymes known to metabolize amphetamine or its metabolites in humans. 
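The pH dependence of amphetamine absorption and excretion described above follows from the standard Henderson–Hasselbalch relationship for a weak base; the following worked sketch (not part of the original article) uses the quoted pKa of 9.9, with the pH values chosen purely as illustrative assumptions:

% Fraction of amphetamine present as the un-ionized (lipid-soluble) free base,
% from the Henderson–Hasselbalch equation for a weak base with pKa = 9.9:
\[
  f_{\text{un-ionized}} = \frac{1}{1 + 10^{\,\mathrm{p}K_a - \mathrm{pH}}}
\]
% Illustrative (assumed) pH values:
%   acidic urine, pH 5.5:    f ≈ 1/(1 + 10^{4.4}) ≈ 0.004%  -> almost fully ionized, little reabsorption, faster excretion
%   blood plasma, pH 7.4:    f ≈ 1/(1 + 10^{2.5}) ≈ 0.3%
%   alkaline urine, pH 8.0:  f ≈ 1/(1 + 10^{1.9}) ≈ 1.2%    -> more free base, more reabsorption, slower excretion

Each unit of pH therefore shifts the un-ionized fraction by roughly an order of magnitude, which is consistent with the wide half-life range (7 to 34 hours) quoted above for highly acidic versus highly alkaline urine.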
Amphetamine has a variety of excreted metabolic products, including , , , benzoic acid, hippuric acid, norephedrine, and phenylacetone. Among these metabolites, the active sympathomimetics are , , and norephedrine. The main metabolic pathways involve aromatic para-hydroxylation, aliphatic alpha- and beta-hydroxylation, "N"-oxidation, "N"-dealkylation, and deamination. The known metabolic pathways, detectable metabolites, and metabolizing enzymes in humans include the following: Pharmacomicrobiomics. The human metagenome (i.e., the genetic composition of an individual and all microorganisms that reside on or within the individual's body) varies considerably between individuals. Since the total number of microbial and viral cells in the human body (over 100 trillion) greatly outnumbers human cells (tens of trillions), there is considerable potential for interactions between drugs and an individual's microbiome, including: drugs altering the composition of the human microbiome, drug metabolism by microbial enzymes modifying the drug's pharmacokinetic profile, and microbial drug metabolism affecting a drug's clinical efficacy and toxicity profile. The field that studies these interactions is known as pharmacomicrobiomics. Similar to most biomolecules and other orally administered xenobiotics (i.e., drugs), amphetamine is predicted to undergo promiscuous metabolism by human gastrointestinal microbiota (primarily bacteria) prior to absorption into the blood stream. The first amphetamine-metabolizing microbial enzyme, tyramine oxidase from a strain of "E. coli" commonly found in the human gut, was identified in 2019. This enzyme was found to metabolize amphetamine, tyramine, and phenethylamine with roughly the same binding affinity for all three compounds. Related endogenous compounds. Amphetamine has a very similar structure and function to the endogenous trace amines, which are naturally occurring neuromodulator molecules produced in the human body and brain. Among this group, the most closely related compounds are phenethylamine, the parent compound of amphetamine, and , an isomer of amphetamine (i.e., it has an identical molecular formula). In humans, phenethylamine is produced directly from by the aromatic amino acid decarboxylase (AADC) enzyme, which converts into dopamine as well. In turn, is metabolized from phenethylamine by phenylethanolamine "N"-methyltransferase, the same enzyme that metabolizes norepinephrine into epinephrine. Like amphetamine, both phenethylamine and regulate monoamine neurotransmission via ; unlike amphetamine, both of these substances are broken down by monoamine oxidase B, and therefore have a shorter half-life than amphetamine. Chemistry. Amphetamine is a methyl homolog of the mammalian neurotransmitter phenethylamine with the chemical formula . The carbon atom adjacent to the primary amine is a stereogenic center, and amphetamine is composed of a racemic 1:1 mixture of two enantiomers. This racemic mixture can be separated into its optical isomers: levoamphetamine and dextroamphetamine. At room temperature, the pure free base of amphetamine is a mobile, colorless, and volatile liquid with a characteristically strong amine odor, and acrid, burning taste. Frequently prepared solid salts of amphetamine include amphetamine adipate, aspartate, hydrochloride, phosphate, saccharate, sulfate, and tannate. Dextroamphetamine sulfate is the most common enantiopure salt. 
Amphetamine is also the parent compound of its own structural class, which includes a number of psychoactive derivatives. In organic chemistry, amphetamine is an excellent chiral ligand for the stereoselective synthesis of . Substituted derivatives. The substituted derivatives of amphetamine, or "substituted amphetamines", are a broad range of chemicals that contain amphetamine as a "backbone"; specifically, this chemical class includes derivative compounds that are formed by replacing one or more hydrogen atoms in the amphetamine core structure with substituents. The class includes amphetamine itself, stimulants like methamphetamine, serotonergic empathogens like MDMA, and decongestants like ephedrine, among other subgroups. Synthesis. Since the first preparation was reported in 1887, numerous synthetic routes to amphetamine have been developed. The most common route of both legal and illicit amphetamine synthesis employs a non-metal reduction known as the Leuckart reaction (method 1). In the first step, a reaction between phenylacetone and formamide, either using additional formic acid or formamide itself as a reducing agent, yields . This intermediate is then hydrolyzed using hydrochloric acid, and subsequently basified, extracted with organic solvent, concentrated, and distilled to yield the free base. The free base is then dissolved in an organic solvent, sulfuric acid added, and amphetamine precipitates out as the sulfate salt. A number of chiral resolutions have been developed to separate the two enantiomers of amphetamine. For example, racemic amphetamine can be treated with to form a diastereoisomeric salt which is fractionally crystallized to yield dextroamphetamine. Chiral resolution remains the most economical method for obtaining optically pure amphetamine on a large scale. In addition, several enantioselective syntheses of amphetamine have been developed. In one example, optically pure is condensed with phenylacetone to yield a chiral Schiff base. In the key step, this intermediate is reduced by catalytic hydrogenation with a transfer of chirality to the carbon atom alpha to the amino group. Cleavage of the benzylic amine bond by hydrogenation yields optically pure dextroamphetamine. A large number of alternative synthetic routes to amphetamine have been developed based on classic organic reactions. One example is the Friedel–Crafts alkylation of benzene by allyl chloride to yield beta chloropropylbenzene which is then reacted with ammonia to produce racemic amphetamine (method 2). Another example employs the Ritter reaction (method 3). In this route, allylbenzene is reacted acetonitrile in sulfuric acid to yield an organosulfate which in turn is treated with sodium hydroxide to give amphetamine via an acetamide intermediate. A third route starts with which through a double alkylation with methyl iodide followed by benzyl chloride can be converted into acid. This synthetic intermediate can be transformed into amphetamine using either a Hofmann or Curtius rearrangement (method 4). A significant number of amphetamine syntheses feature a reduction of a nitro, imine, oxime, or other nitrogen-containing functional groups. In one such example, a Knoevenagel condensation of benzaldehyde with nitroethane yields . The double bond and nitro group of this intermediate is reduced using either catalytic hydrogenation or by treatment with lithium aluminium hydride (method 5). 
Another method is the reaction of phenylacetone with ammonia, producing an imine intermediate that is reduced to the primary amine using hydrogen over a palladium catalyst or lithium aluminum hydride (method 6). Detection in body fluids. Amphetamine is frequently measured in urine or blood as part of a drug test for sports, employment, poisoning diagnostics, and forensics. Techniques such as immunoassay, which is the most common form of amphetamine test, may cross-react with a number of sympathomimetic drugs. Chromatographic methods specific for amphetamine are employed to prevent false positive results. Chiral separation techniques may be employed to help distinguish the source of the drug, whether prescription amphetamine, prescription amphetamine prodrugs, (e.g., selegiline), over-the-counter drug products that contain levomethamphetamine, or illicitly obtained substituted amphetamines. Several prescription drugs produce amphetamine as a metabolite, including benzphetamine, clobenzorex, famprofazone, fenproporex, lisdexamfetamine, mesocarb, methamphetamine, prenylamine, and selegiline, among others. These compounds may produce positive results for amphetamine on drug tests. Amphetamine is generally only detectable by a standard drug test for approximately 24 hours, although a high dose may be detectable for  days. For the assays, a study noted that an enzyme multiplied immunoassay technique (EMIT) assay for amphetamine and methamphetamine may produce more false positives than liquid chromatography–tandem mass spectrometry. Gas chromatography–mass spectrometry (GC–MS) of amphetamine and methamphetamine with the derivatizing agent chloride allows for the detection of methamphetamine in urine. GC–MS of amphetamine and methamphetamine with the chiral derivatizing agent Mosher's acid chloride allows for the detection of both dextroamphetamine and dextromethamphetamine in urine. Hence, the latter method may be used on samples that test positive using other methods to help distinguish between the various sources of the drug. History, society, and culture. Amphetamine was first synthesized in 1887 in Germany by Romanian chemist Lazăr Edeleanu who named it "phenylisopropylamine"; its stimulant effects remained unknown until 1927, when it was independently resynthesized by Gordon Alles and reported to have sympathomimetic properties. Amphetamine had no medical use until late 1933, when Smith, Kline and French began selling it as an inhaler under the brand name Benzedrine as a decongestant. Benzedrine sulfate was introduced 3 years later and was used to treat a wide variety of medical conditions, including narcolepsy, obesity, low blood pressure, low libido, and chronic pain, among others. During World War II, amphetamine and methamphetamine were used extensively by both the Allied and Axis forces for their stimulant and performance-enhancing effects. As the addictive properties of the drug became known, governments began to place strict controls on the sale of amphetamine. For example, during the early 1970s in the United States, amphetamine became a schedule II controlled substance under the Controlled Substances Act. In spite of strict government controls, amphetamine has been used legally or illicitly by people from a variety of backgrounds, including authors, musicians, mathematicians, and athletes. Amphetamine is still illegally synthesized today in clandestine labs and sold on the black market, primarily in European countries. 
Among European Union (EU) member states 11.9 million adults of ages have used amphetamine or methamphetamine at least once in their lives and 1.7 million have used either in the last year. During 2012, approximately 5.9 metric tons of illicit amphetamine were seized within EU member states; the "street price" of illicit amphetamine within the EU ranged from  per gram during the same period. Outside Europe, the illicit market for amphetamine is much smaller than the market for methamphetamine and MDMA. Legal status. As a result of the United Nations 1971 Convention on Psychotropic Substances, amphetamine became a schedule II controlled substance, as defined in the treaty, in all 183 state parties. Consequently, it is heavily regulated in most countries. Some countries, such as South Korea and Japan, have banned substituted amphetamines even for medical use. In other nations, such as Canada (schedule I drug), the Netherlands (List I drug), the United States (schedule II drug), Australia (schedule 8), Thailand (category 1 narcotic), and United Kingdom (class B drug), amphetamine is in a restrictive national drug schedule that allows for its use as a medical treatment. Pharmaceutical products. Several currently marketed amphetamine formulations contain both enantiomers, including those marketed under the brand names Adderall, Adderall XR, Mydayis, Adzenys ER, , Dyanavel XR, Evekeo, and Evekeo ODT. Of those, Evekeo (including Evekeo ODT) is the only product containing only racemic amphetamine (as amphetamine sulfate), and is therefore the only one whose active moiety can be accurately referred to simply as "amphetamine". Dextroamphetamine, marketed under the brand names Dexedrine and Zenzedi, is the only enantiopure amphetamine product currently available. A prodrug form of dextroamphetamine, lisdexamfetamine, is also available and is marketed under the brand name Vyvanse. As it is a prodrug, lisdexamfetamine is structurally different from dextroamphetamine, and is inactive until it metabolizes into dextroamphetamine. The free base of racemic amphetamine was previously available as Benzedrine, Psychedrine, and Sympatedrine. Levoamphetamine was previously available as Cydril. Many current amphetamine pharmaceuticals are salts due to the comparatively high volatility of the free base. However, oral suspension and orally disintegrating tablet (ODT) dosage forms composed of the free base were introduced in 2015 and 2016, respectively. Some of the current brands and their generic equivalents are listed below.
2506
Asynchronous communication
In telecommunications, asynchronous communication is transmission of data, generally without the use of an external clock signal, where data can be transmitted intermittently rather than in a steady stream. Any timing required to recover data from the communication symbols is encoded within the symbols. The most significant aspect of asynchronous communications is that data is not transmitted at regular intervals, thus making possible variable bit rate, and that the transmitter and receiver clock generators do not have to be exactly synchronized all the time. In asynchronous transmission, data is sent one byte at a time and each byte is preceded by start and stop bits. Physical layer. In asynchronous serial communication in the physical protocol layer, the data blocks are code words of a certain word length, for example octets (bytes) or ASCII characters, delimited by start bits and stop bits. A variable length space can be inserted between the code words. No bit synchronization signal is required. This is sometimes called character oriented communication. Examples include MNP2 and modems older than V.2. Data link layer and higher. Asynchronous communication at the data link layer or higher protocol layers is known as statistical multiplexing, for example Asynchronous Transfer Mode (ATM). In this case, the asynchronously transferred blocks are called data packets, for example ATM cells. The opposite is circuit switched communication, which provides constant bit rate, for example ISDN and SONET/SDH. The packets may be encapsulated in a data frame, with a frame synchronization bit sequence indicating the start of the frame, and sometimes also a bit synchronization bit sequence, typically 01010101, for identification of the bit transition times. Note that at the physical layer, this is considered as synchronous serial communication. Examples of packet mode data link protocols that can be/are transferred using synchronous serial communication are the HDLC, Ethernet, PPP and USB protocols. Application layer. An asynchronous communication service or application does not require a constant bit rate. Examples are file transfer, email and the World Wide Web. An example of the opposite, a synchronous communication service, is realtime streaming media, for example IP telephony, IPTV and video conferencing. Electronically mediated communication. Electronically mediated communication often happens asynchronously in that the participants do not communicate concurrently. Examples include email and bulletin-board systems, where participants send or post messages at different times. The term "asynchronous communication" acquired currency in the field of online learning, where teachers and students often exchange information asynchronously instead of synchronously (that is, simultaneously), as they would in face-to-face or in telephone conversations.
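The start and stop bit framing described under "Physical layer" above can be illustrated with a short sketch. The Python below assumes a common 8-N-1 format (one start bit, eight data bits sent least-significant-bit first, one stop bit, idle-high line); it is an illustration of the framing idea only, not a model of any particular UART.

```python
# Sketch of asynchronous 8-N-1 framing: idle line is 1, start bit is 0,
# eight data bits LSB-first, stop bit is 1. Illustrative only.

def frame_byte(value: int) -> list[int]:
    """Return the line levels for one asynchronously transmitted byte."""
    bits = [0]                                    # start bit
    bits += [(value >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits += [1]                                   # stop bit
    return bits

def unframe(bits: list[int]) -> int:
    """Recover the byte from a 10-bit frame, checking start and stop bits."""
    if bits[0] != 0 or bits[9] != 1:
        raise ValueError("framing error: bad start or stop bit")
    return sum(bit << i for i, bit in enumerate(bits[1:9]))

frame = frame_byte(ord("A"))   # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
assert unframe(frame) == ord("A")
```

Because each byte carries its own start and stop bits, the receiver only needs to stay in step for the ten bit-times of a single frame, which is why the transmitter and receiver clock generators do not have to be exactly synchronized all the time.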
2508
Artillery
Artillery is a class of heavy military ranged weapons that launch munitions far beyond the range and power of infantry firearms. Early artillery development focused on the ability to breach defensive walls and fortifications during sieges, and led to heavy, fairly immobile siege engines. As technology improved, lighter, more mobile field artillery cannons developed for battlefield use. This development continues today; modern self-propelled artillery vehicles are highly mobile weapons of great versatility generally providing the largest share of an army's total firepower. Originally, the word "artillery" referred to any group of soldiers primarily armed with some form of manufactured weapon or armor. Since the introduction of gunpowder and cannon, "artillery" has largely meant cannon, and in contemporary usage, usually refers to shell-firing guns, howitzers, and mortars (collectively called "barrel artillery", "cannon artillery" or "gun artillery") and rocket artillery. In common speech, the word "artillery" is often used to refer to individual devices, along with their accessories and fittings, although these assemblages are more properly called "equipment". However, there is no generally recognized generic term for a gun, howitzer, mortar, and so forth: the United States uses "artillery piece", but most English-speaking armies use "gun" and "mortar". The projectiles fired are typically either "shot" (if solid) or "shell" (if not solid). Historically, variants of solid shot including canister, chain shot and grapeshot were also used. "Shell" is a widely used generic term for a projectile, which is a component of munitions. By association, artillery may also refer to the arm of service that customarily operates such engines. In some armies, the artillery arm has operated field, coastal, anti-aircraft, and anti-tank artillery; in others these have been separate arms, and with some nations coastal has been a naval or marine responsibility. In the 20th century, technology-based target acquisition devices (such as radar) and systems (such as sound ranging and flash spotting) emerged in order to acquire targets, primarily for artillery. These are usually operated by one or more of the artillery arms. The widespread adoption of indirect fire in the early 20th century introduced the need for specialist data for field artillery, notably survey and meteorological, and in some armies, provision of these are the responsibility of the artillery arm. Artillery has been used since at least the early Industrial Revolution. The majority of combat deaths in the Napoleonic Wars, World War I, and World War II were caused by artillery. In 1944, Joseph Stalin said in a speech that artillery was "the god of war". Artillery piece. Although not called as such, siege engines performing the role recognizable as artillery have been employed in warfare since antiquity. The first known catapult was developed in Syracuse in 399 BC. Until the introduction of gunpowder into western warfare, artillery was dependent upon mechanical energy which not only severely limited the kinetic energy of the projectiles, it also required the construction of very large engines to accumulate sufficient energy. A 1st-century BC Roman catapult launching stones achieved a kinetic energy of 16 kilojoules, compared to a mid-19th-century 12-pounder gun, which fired a round, with a kinetic energy of 240 kilojoules, or a 20th-century US battleship that fired a projectile from its main battery with an energy level surpassing 350 megajoules. 
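The kinetic-energy figures quoted above follow from the familiar relation KE = ½mv². The short Python sketch below reproduces their order of magnitude; the projectile masses and velocities used are assumed, round-number values chosen only for illustration and are not taken from the article.

```python
# Kinetic energy KE = 0.5 * m * v**2, using assumed round-number masses and
# muzzle velocities chosen only to show how figures of this order arise.

def kinetic_energy(mass_kg: float, velocity_ms: float) -> float:
    return 0.5 * mass_kg * velocity_ms ** 2

examples = {
    "catapult stone (assumed 26 kg at 35 m/s)":      kinetic_energy(26, 35),
    "12-pounder round (assumed 5.4 kg at 300 m/s)":  kinetic_energy(5.4, 300),
    "battleship shell (assumed 1225 kg at 760 m/s)": kinetic_energy(1225, 760),
}
for name, joules in examples.items():
    print(f"{name}: {joules / 1000:,.0f} kJ")
# roughly 16 kJ, 240 kJ and about 350,000 kJ (350 MJ) respectively
```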
From the Middle Ages through most of the modern era, artillery pieces on land were moved by horse-drawn gun carriages. In the contemporary era, artillery pieces and their crew relied on wheeled or tracked vehicles as transportation. These land versions of artillery were dwarfed by railway guns; the largest of these large-calibre guns ever conceived – Project Babylon of the Supergun affair – was theoretically capable of putting a satellite into orbit. Artillery used by naval forces has also changed significantly, with missiles generally replacing guns in surface warfare. Over the course of military history, projectiles were manufactured from a wide variety of materials, into a wide variety of shapes, using many different methods in which to target structural/defensive works and inflict enemy casualties. The engineering applications for ordnance delivery have likewise changed significantly over time, encompassing some of the most complex and advanced technologies in use today. In some armies, the weapon of artillery is the projectile, not the equipment that fires it. The process of delivering fire onto the target is called gunnery. The actions involved in operating an artillery piece are collectively called "serving the gun" by the "detachment" or gun crew, constituting either direct or indirect artillery fire. The manner in which gunnery crews (or formations) are employed is called artillery support. At different periods in history, this may refer to weapons designed to be fired from ground-, sea-, and even air-based weapons platforms. Crew. Some armed forces use the term "gunners" for the soldiers and sailors with the primary function of using artillery. The gunners and their guns are usually grouped in teams called either "crews" or "detachments". Several such crews and teams with other functions are combined into a unit of artillery, usually called a battery, although sometimes called a company. In gun detachments, each role is numbered, starting with "1" the Detachment Commander, and the highest number being the Coverer, the second-in-command. "Gunner" is also the lowest rank, and junior non-commissioned officers are "Bombardiers" in some artillery arms. Batteries are roughly equivalent to a company in the infantry, and are combined into larger military organizations for administrative and operational purposes, either battalions or regiments, depending on the army. These may be grouped into brigades; the Russian army also groups some brigades into artillery divisions, and the People's Liberation Army has artillery corps. The term "artillery" also designates a combat arm of most military services when used organizationally to describe units and formations of the national armed forces that operate the weapons. Tactics. During military operations, field artillery has the role of providing support to other arms in combat or of attacking targets, particularly in-depth. Broadly, these effects fall into two categories, aiming either to suppress or neutralize the enemy, or to cause casualties, damage, and destruction. This is mostly achieved by delivering high-explosive munitions to suppress, or inflict casualties on the enemy from casing fragments and other debris and from blast, or by destroying enemy positions, equipment, and vehicles. Non-lethal munitions, notably smoke, can also suppress or neutralize the enemy by obscuring their view. Fire may be directed by an artillery observer or another observer, including crewed and uncrewed aircraft, or called onto map coordinates. 
Military doctrine has had a significant influence on the core engineering design considerations of artillery ordnance through its history, in seeking to achieve a balance between the delivered volume of fire and ordnance mobility. However, during the modern period, the consideration of protecting the gunners also arose due to the late-19th-century introduction of the new generation of infantry weapons using the conoidal bullet, better known as the Minié ball, with a range almost as long as that of field artillery. The gunners' increasing proximity to and participation in direct combat against other combat arms and attacks by aircraft made the introduction of a gun shield necessary. The problems of how to employ a fixed or horse-towed gun in mobile warfare necessitated the development of new methods of transporting the artillery into combat. Two distinct forms of artillery were developed: the towed gun, used primarily to attack or defend a fixed line; and the self-propelled gun, intended to accompany a mobile force and to provide continuous fire support and/or suppression. These influences have guided the development of artillery ordnance, systems, organizations, and operations until the present, with artillery systems capable of providing support at ranges from as little as 100 m to the intercontinental ranges of ballistic missiles. The only combat in which artillery is unable to take part is close-quarters combat, with the possible exception of artillery reconnaissance teams. Etymology. The word as used in the current context originated in the Middle Ages. One suggestion is that it comes from French "atelier", meaning the place where manual work is done. Another suggestion is that it originates from the 13th century and the Old French "artillier", designating craftsmen and manufacturers of all materials and warfare equipment (spears, swords, armor, war machines); and, for the next 250 years, the sense of the word "artillery" covered all forms of military weapons. Hence, the naming of the Honourable Artillery Company, which was essentially an infantry unit until the 19th century. Another suggestion is that it comes from the Italian "arte de tirare" (art of shooting), coined by one of the first theorists on the use of artillery, Niccolò Tartaglia. History. Mechanical systems used for throwing ammunition in ancient warfare, also known as "engines of war", like the catapult, onager, trebuchet, and ballista, are also referred to by military historians as artillery. Medieval. During medieval times, more types of artillery were developed, most notably the trebuchet. Traction trebuchets, using manpower to launch projectiles, have been used in ancient China since the 4th century as anti-personnel weapons. However, in the 12th century, the counterweight trebuchet was introduced, with the earliest mention of it being in 1187. Invention of gunpowder. Early Chinese artillery had vase-like shapes. This includes the "long range awe inspiring" cannon dated from 1350 and found in the 14th century Ming Dynasty treatise "Huolongjing". With the development of better metallurgy techniques, later cannons abandoned the vase shape of early Chinese artillery. This change can be seen in the bronze "thousand ball thunder cannon", an early example of field artillery. These small, crude weapons diffused into the Middle East (the "madfaa") and reached Europe in the 13th century, in a very limited manner. In Asia, Mongols adopted the Chinese artillery and used it effectively in the great conquest. 
By the late 14th century, Chinese rebels used organized artillery and cavalry to push Mongols out. As small smooth-bore barrels, these were initially cast in iron or bronze around a core, with the first drilled bore ordnance recorded in operation near Seville in 1247. They fired lead, iron, or stone balls, sometimes large arrows and on occasions simply handfuls of whatever scrap came to hand. During the Hundred Years' War, these weapons became more common, initially as the bombard and later the cannon. Cannon were always muzzle-loaders. While there were many early attempts at breech-loading designs, a lack of engineering knowledge rendered these even more dangerous to use than muzzle-loaders. Expansion of use. In 1415, the Portuguese invaded the Mediterranean port town of Ceuta. While it is difficult to confirm the use of firearms in the siege of the city, it is known the Portuguese defended it thereafter with firearms, namely "bombardas", "colebratas", and "falconetes". In 1419, Sultan Abu Sa'id led an army to reconquer the fallen city, and Marinids brought cannons and used them in the assault on Ceuta. Finally, hand-held firearms and riflemen appear in Morocco, in 1437, in an expedition against the people of Tangiers. It is clear these weapons had developed into several different forms, from small guns to large artillery pieces. The artillery revolution in Europe caught on during the Hundred Years' War and changed the way that battles were fought. In the preceding decades, the English had even used a gunpowder-like weapon in military campaigns against the Scottish. However, at this time, the cannons used in battle were very small and not particularly powerful. Cannons were only useful for the defense of a castle, as demonstrated at Breteuil in 1356, when the besieged English used a cannon to destroy an attacking French assault tower. By the end of the 14th century, cannon were only powerful enough to knock in roofs, and could not penetrate castle walls. However, a major change occurred between 1420 and 1430, when artillery became much more powerful and could now batter strongholds and fortresses quite efficiently. The English, French, and Burgundians all advanced in military technology, and as a result the traditional advantage that went to the defense in a siege was lost. The cannon during this period were elongated, and the recipe for gunpowder was improved to make it three times as powerful as before. These changes led to the increased power in the artillery weapons of the time. Joan of Arc encountered gunpowder weaponry several times. When she led the French against the English at the Battle of Tourelles, in 1430, she faced heavy gunpowder fortifications, and yet her troops prevailed in that battle. In addition, she led assaults against the English-held towns of Jargeau, Meung, and Beaugency, all with the support of large artillery units. When she led the assault on Paris, Joan faced stiff artillery fire, especially from the suburb of St. Denis, which ultimately led to her defeat in this battle. In April 1430, she went to battle against the Burgundians, whose support was purchased by the English. At this time, the Burgundians had the strongest and largest gunpowder arsenal among the European powers, and yet the French, under Joan of Arc's leadership, were able to beat back the Burgundians and defend themselves. As a result, most of the battles of the Hundred Years' War that Joan of Arc participated in were fought with gunpowder artillery. 
The army of Mehmet the Conqueror, which conquered Constantinople in 1453, included both artillery and foot soldiers armed with gunpowder weapons. The Ottomans brought to the siege sixty-nine guns in fifteen separate batteries and trained them at the walls of the city. The barrage of Ottoman cannon fire lasted forty days, and they are estimated to have fired 19,320 times. Artillery also played a decisive role in the Battle of St. Jakob an der Birs of 1444. Early cannon were not always reliable; King James II of Scotland was killed by the accidental explosion of one of his own cannon, imported from Flanders, at the siege of Roxburgh Castle in 1460. The able use of artillery supported in large measure the expansion and defense of the Portuguese Empire, as it was a necessary tool that allowed the Portuguese to face overwhelming odds both on land and sea from Morocco to Asia. In great sieges and in sea battles, the Portuguese demonstrated a level of proficiency in the use of artillery after the beginning of the 16th century unequalled by contemporary European neighbours, in part due to the experience gained in intense fighting in Morocco, which served as a proving ground for artillery and its practical application, and made Portugal a forerunner in gunnery for decades. During the reign of King Manuel (1495–1521) at least 2,017 cannon were sent to Morocco for garrison defense, with more than 3,000 cannon estimated to have been required during that 26-year period. An especially noticeable division between siege guns and anti-personnel guns enhanced the use and effectiveness of Portuguese firearms above contemporary powers, making cannon the most essential element in the Portuguese arsenal. The three major classes of Portuguese artillery were anti-personnel guns with a high bore length (including: "rebrodequim", "berço", "falconete", "falcão", "sacre", "áspide", "cão", "serpentina" and "passavolante"); bastion guns which could batter fortifications ("camelete", "leão", "pelicano", "basilisco", "águia", "camelo", "roqueira", "urso"); and howitzers that fired large stone cannonballs in a high arc, weighed up to 4,000 pounds, and could fire incendiary devices, such as a hollow iron ball filled with pitch and fuse, designed to be fired at close range and burst on contact. The most popular in Portuguese arsenals was the "berço", a 5 cm, one-pounder bronze breech-loading cannon that weighed 150 kg with an effective range of 600 meters. A tactical innovation the Portuguese introduced in fort defense was the use of combinations of projectiles against massed assaults. Although canister shells had been developed in the early 15th century, the Portuguese were the first to employ them extensively, and Portuguese engineers invented a canister round which consisted of a thin lead case filled with iron pellets that broke up at the muzzle and scattered its contents in a narrow pattern. An innovation which Portugal adopted in advance of other European powers was fuse-delayed action shells, which were in common use by 1505. Although dangerous, their effectiveness meant a sixth of all rounds used by the Portuguese in Morocco were of the fused-shell variety. The new Ming Dynasty established the "Divine Engine Battalion" (神机营), which specialized in various types of artillery. Light cannons and cannons with multiple volleys were developed. In a campaign to suppress a local minority rebellion near today's Burmese border, "the Ming army used a 3-line method of arquebuses/muskets to destroy an elephant formation." 
When the Portuguese and Spanish arrived in Southeast Asia, they found that the local kingdoms were already using cannons. Portuguese and Spanish invaders were unpleasantly surprised and even outgunned on occasion. Duarte Barbosa, ca. 1514, said that the inhabitants of Java were great masters in casting artillery and very good artillerymen. They made many one-pounder cannons (cetbang or rentaka), long muskets, "spingarde" (arquebus), "schioppi" (hand cannon), Greek fire, guns (cannons), and other fire-works. Every place was considered excellent in casting artillery, and in the knowledge of using it. In 1513, the Javanese fleet led by Pati Unus sailed to attack Portuguese Malacca "with much artillery made in Java, for the Javanese are skilled in founding and casting, and in all works in iron, over and above what they have in India". By the early 16th century, the Javanese were already producing large guns locally, some of which survive to the present day and are dubbed "sacred cannon" or "holy cannon". These cannons varied between 180- and 260-pounders, weighing anywhere between 3 and 8 tons and measuring between 3 and 6 m. Between 1593 and 1597, about 200,000 Korean and Chinese troops who fought against Japan in Korea actively used heavy artillery in both siege and field combat. Korean forces mounted artillery in ships as naval guns, providing an advantage against the Japanese navy, which used "Kunikuzushi" (国崩し – Japanese breech-loading swivel gun) and "Ōzutsu" (大筒 – large size Tanegashima) as their largest firearms. Smoothbores. Bombards were of value mainly in sieges. A famous Turkish example used at the siege of Constantinople in 1453 weighed 19 tons, took 200 men and sixty oxen to emplace, and could fire just seven times a day. The Fall of Constantinople was perhaps "the first event of supreme importance whose result was determined by the use of artillery" when the huge bronze cannons of Mehmed II breached the city's walls, ending the Byzantine Empire, according to Sir Charles Oman. Bombards developed in Europe were massive smoothbore weapons distinguished by their lack of a field carriage, immobility once emplaced, highly individual design, and noted unreliability (in 1460 James II, King of Scots, was killed when one exploded at the siege of Roxburgh). Their large size precluded the barrels being cast and they were constructed out of metal staves or rods bound together with hoops like a barrel, giving their name to the gun barrel. The use of the word "cannon" marks the introduction in the 15th century of a dedicated field carriage with axle, trail and animal-drawn limber—this produced mobile field pieces that could move and support an army in action, rather than being found only in siege and static defenses. The reduction in the size of the barrel was due to improvements in both iron technology and gunpowder manufacture, while the development of trunnions—projections at the side of the cannon as an integral part of the cast—allowed the barrel to be fixed to a more movable base, and also made raising or lowering the barrel much easier. The first land-based mobile weapon is usually credited to Jan Žižka, who deployed his oxen-hauled cannon during the Hussite Wars of Bohemia (1418–1424). However, cannons were still large and cumbersome. With the rise of musketry in the 16th century, cannon were largely (though not entirely) displaced from the battlefield—the cannon were too slow and cumbersome to be used and too easily lost to a rapid enemy advance. 
The combining of shot and powder into a single unit, a cartridge, occurred in the 1620s with a simple fabric bag, and was quickly adopted by all nations. It speeded loading and made it safer, but unexpelled bag fragments were an additional fouling in the gun barrel and a new tool—a worm—was introduced to remove them. Gustavus Adolphus is identified as the general who made cannon an effective force on the battlefield—pushing the development of much lighter and smaller weapons and deploying them in far greater numbers than previously. The outcome of battles was still determined by the clash of infantry. Shells, explosive-filled fused projectiles, were in use by the 15th century. The development of specialized pieces—shipboard artillery, howitzers and mortars—was also begun in this period. More esoteric designs, like the multi-barrel "ribauldequin" (known as "organ guns"), were also produced. The 1650 book by Kazimierz Siemienowicz, "Artis Magnae Artilleriae pars prima", was one of the most important contemporary publications on the subject of artillery. For over two centuries this work was used in Europe as a basic artillery manual. One of the most significant effects of artillery during this period was however somewhat more indirect—by easily reducing to rubble any medieval-type fortification or city wall (some of which had stood since Roman times), it abolished millennia of siege-warfare strategies and styles of fortification building. This led, among other things, to a frenzy of new bastion-style fortifications being built all over Europe and in its colonies, but also had a strong integrating effect on emerging nation-states, as kings were able to use their newfound artillery superiority to force any local dukes or lords to submit to their will, setting the stage for the absolutist kingdoms to come. Modern rocket artillery can trace its heritage back to the Mysorean rockets of India. Their first recorded use was in 1780 during the battles of the Second, Third and Fourth Mysore Wars. The wars fought between the British East India Company and the Kingdom of Mysore in India made use of the rockets as a weapon. In the Battle of Pollilur, the Siege of Seringapatam (1792) and in the Battle of Seringapatam in 1799 these rockets were used with considerable effect against the British. After the wars, several Mysore rockets were sent to England, but experiments with heavier payloads were unsuccessful. In 1804, William Congreve, considering the Mysorean rockets to have too short a range (less than 1,000 yards), developed rockets in numerous sizes with ranges up to 3,000 yards, eventually utilizing iron casings; the resulting Congreve rocket was used effectively during the Napoleonic Wars and the War of 1812. Napoleonic. With the Napoleonic Wars, artillery experienced changes in both physical design and operation. Rather than being overseen by "mechanics", artillery was viewed as its own service branch with the capability of dominating the battlefield. The success of the French artillery companies was at least in part due to the presence of specially trained artillery officers leading and coordinating during the chaos of battle. Napoleon, himself a former artillery officer, perfected the tactic of massed artillery batteries unleashed upon a critical point in his enemies' line as a prelude to a decisive infantry and cavalry assault. Physically, cannons continued to become smaller and lighter. 
During the Seven Years' War, King Frederick II of Prussia used these advances to deploy horse artillery that could move throughout the battlefield. Frederick also introduced the reversible iron ramrod, which was much more resistant to breakage than older wooden designs. The reversibility aspect also helped increase the rate of fire, since a soldier would no longer have to worry about which end of the ramrod they were using. Jean-Baptiste de Gribeauval, a French artillery engineer, introduced the standardization of cannon design in the mid-18th century. He developed a 6-inch (150 mm) field howitzer whose gun barrel, carriage assembly and ammunition specifications were made uniform for all French cannons. The standardized interchangeable parts of these cannons down to the nuts, bolts and screws made their mass production and repair much easier. While the Gribeauval system made for more efficient production and assembly, the carriages used were heavy and the gunners were forced to march on foot (instead of riding on the limber and gun as in the British system). Each cannon was named for the weight of its projectiles, giving variants such as the 4, 8, and 12, indicating the weight in pounds. The projectiles themselves included solid balls or canister containing lead bullets or other material. These canister shots acted as massive shotguns, peppering the target with hundreds of projectiles at close range. The solid balls, known as round shot, were most effective when fired at shoulder height across a flat, open area. The ball would tear through the ranks of the enemy or bounce along the ground, breaking legs and ankles. Modern. The development of modern artillery occurred in the mid to late 19th century as a result of the convergence of various improvements in the underlying technology. Advances in metallurgy allowed for the construction of breech-loading rifled guns that could fire at a much greater muzzle velocity. After the British artillery was shown up in the Crimean War as having barely changed since the Napoleonic Wars, the industrialist William Armstrong was awarded a contract by the government to design a new piece of artillery. Production started in 1855 at the Elswick Ordnance Company and the Royal Arsenal at Woolwich, and the outcome was the revolutionary Armstrong Gun, which marked the birth of modern artillery. Three of its features particularly stand out. First, the piece was rifled, which allowed for a much more accurate and powerful action. Although rifling had been tried on small arms since the 15th century, the necessary machinery to accurately rifle artillery was not available until the mid-19th century. Martin von Wahrendorff and Joseph Whitworth independently produced rifled cannon in the 1840s, but it was Armstrong's gun that was first to see widespread use during the Crimean War. The cast iron shell of the Armstrong gun was similar in shape to a Minié ball and had a thin lead coating which made it fractionally larger than the gun's bore and which engaged with the gun's rifling grooves to impart spin to the shell. This spin, together with the elimination of windage as a result of the tight fit, enabled the gun to achieve greater range and accuracy than existing smooth-bore muzzle-loaders with a smaller powder charge. His gun was also a breech-loader. Although attempts at breech-loading mechanisms had been made since medieval times, the essential engineering problem was that the mechanism could not withstand the explosive charge. 
It was only with the advances in metallurgy and precision engineering capabilities during the Industrial Revolution that Armstrong was able to construct a viable solution. The gun combined all the properties that make up an effective artillery piece. The gun was mounted on a carriage in such a way as to return the gun to firing position after the recoil. What made the gun really revolutionary lay in the technique of the construction of the gun barrel that allowed it to withstand much more powerful explosive forces. The "built-up" method involved assembling the barrel with wrought-iron (later mild steel was used) tubes of successively smaller diameter. The tube would then be heated to allow it to expand and fit over the previous tube. When it cooled the gun would contract although not back to its original size, which allowed an even pressure along the walls of the gun which was directed inward against the outward forces that the gun's firing exerted on the barrel. Another innovative feature, more usually associated with 20th-century guns, was what Armstrong called its "grip", which was essentially a squeeze bore; the 6 inches of the bore at the muzzle end was of slightly smaller diameter, which centered the shell before it left the barrel and at the same time slightly swaged down its lead coating, reducing its diameter and slightly improving its ballistic qualities. Armstrong's system was adopted in 1858, initially for "special service in the field" and initially he produced only smaller artillery pieces, 6-pounder (2.5 in/64 mm) mountain or light field guns, 9-pounder (3 in/76 mm) guns for horse artillery, and 12-pounder (3 inches /76 mm) field guns. The first cannon to contain all 'modern' features is generally considered to be the French 75 of 1897. The gun used cased ammunition, was breech-loading, had modern sights, and a self-contained firing mechanism. It was the first field gun to include a hydro-pneumatic recoil mechanism, which kept the gun's trail and wheels perfectly still during the firing sequence. Since it did not need to be re-aimed after each shot, the crew could fire as soon as the barrel returned to its resting position. In typical use, the French 75 could deliver fifteen rounds per minute on its target, either shrapnel or melinite high-explosive, up to about 5 miles (8,500 m) away. Its firing rate could even reach close to 30 rounds per minute, albeit only for a very short time and with a highly experienced crew. These were rates that contemporary bolt action rifles could not match. Indirect fire. Indirect fire, the firing of a projectile without relying on direct line of sight between the gun and the target, possibly dates back to the 16th century. Early battlefield use of indirect fire may have occurred at Paltzig in July 1759, when the Russian artillery fired over the tops of trees, and at the Battle of Waterloo, where a battery of the Royal Horse Artillery fired shrapnel indirectly against advancing French troops. In 1882, Russian Lieutenant Colonel KG Guk published "Indirect Fire for Field Artillery", which provided a practical method of using aiming points for indirect fire by describing, "all the essentials of aiming points, crest clearance, and corrections to fire by an observer". A few years later, the Richtfläche (lining-plane) sight was invented in Germany and provided a means of indirect laying in azimuth, complementing the clinometers for indirect laying in elevation which already existed. 
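The idea behind Guk's aiming points and the lining-plane sight can be sketched numerically. In the Python below, the layer cannot see the target but can see a distant aiming point, so the sight is offset by the angle between the aiming point and the computed bearing to the target; the flat grid, the coordinates, and the aiming point are all assumed, illustrative values rather than anything from the article.

```python
# Indirect laying in azimuth with an aiming point, on a flat grid (illustrative).
# The gun cannot see the target, but the layer can see a distant aiming point,
# so the sight is offset by the angle between the aiming point and the target.
import math

def grid_bearing(from_xy, to_xy):
    """Bearing in degrees clockwise from grid north."""
    dx, dy = to_xy[0] - from_xy[0], to_xy[1] - from_xy[1]
    return math.degrees(math.atan2(dx, dy)) % 360

gun          = (0.0, 0.0)        # assumed grid coordinates, metres
target       = (4000.0, 6928.0)  # about 8 km away, invisible from the gun
aiming_point = (-1000.0, 2000.0) # a church spire the layer can see

bearing_to_target = grid_bearing(gun, target)
bearing_to_ap     = grid_bearing(gun, aiming_point)
sight_setting     = (bearing_to_target - bearing_to_ap) % 360

print(f"target bearing {bearing_to_target:.1f} deg, "
      f"aiming point {bearing_to_ap:.1f} deg, sight offset {sight_setting:.1f} deg")
```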
Despite conservative opposition within the German army, indirect fire was adopted as doctrine by the 1890s. In the early 1900s, Goertz in Germany developed an optical sight for azimuth laying. It quickly replaced the lining-plane; in English, it became the 'Dial Sight' (UK) or 'Panoramic Telescope' (US). The British had halfheartedly experimented with indirect fire techniques since the 1890s, but with the onset of the Boer War, they were the first to apply the theory in practice in 1899, although they had to improvise without a lining-plane sight. In the next 15 years leading up to World War I, the techniques of indirect fire became available for all types of artillery. Indirect fire was the defining characteristic of 20th-century artillery and led to undreamt-of changes in the amount of artillery, its tactics, organisation, and techniques, most of which occurred during World War I. An implication of indirect fire and improving guns was an increasing range between gun and target; this increased the time of flight and raised the vertex of the trajectory. The result was decreasing accuracy (the increasing distance between the target and the mean point of impact of the shells aimed at it) caused by the increasing effects of non-standard conditions. Indirect firing data was based on standard conditions including a specific muzzle velocity, zero wind, air temperature and density, and propellant temperature. In practice, this standard combination of conditions almost never existed; they varied throughout the day and from day to day, and the greater the time of flight, the greater the inaccuracy. An added complication was the need for survey to accurately fix the coordinates of the gun position and provide accurate orientation for the guns. Of course, targets had to be accurately located, but by 1916, air photo interpretation techniques enabled this, and ground survey techniques could sometimes be used. In 1914, the methods of correcting firing data for the actual conditions were often convoluted, and the availability of data about actual conditions was rudimentary or non-existent; the assumption was that fire would always be ranged (adjusted). British heavy artillery worked energetically to progressively solve all these problems from late 1914 onwards, and by early 1918, had effective processes in place for both field and heavy artillery. These processes enabled 'map-shooting', later called 'predicted fire'; it meant that effective fire could be delivered against an accurately located target without ranging. Nevertheless, the mean point of impact was still some tens of yards from the target-centre aiming point. It was not precision fire, but it was good enough for concentrations and barrages. These processes remain in use into the 21st century with refinements to calculations enabled by computers and improved data capture about non-standard conditions. The British major-general Henry Hugh Tudor pioneered armour and artillery cooperation at the breakthrough Battle of Cambrai. The improvements in providing and using data for non-standard conditions (propellant temperature, muzzle velocity, wind, air temperature, and barometric pressure) were developed by the major combatants throughout the war and enabled effective predicted fire. The effectiveness of this was demonstrated by the British in 1917 (at Cambrai) and by Germany the following year (Operation Michael). Major General J.B.A. 
Bailey, British Army (retired), wrote: An estimated 75,000 French soldiers were casualties of friendly artillery fire in the four years of World War I. Precision-guidance. Modern artillery is most obviously distinguished by its long range, the firing of an explosive shell or rocket, and a mobile carriage for firing and transport. However, its most important characteristic is the use of indirect fire, whereby the firing equipment is aimed without seeing the target through its sights. Indirect fire emerged at the beginning of the 20th century and was greatly enhanced by the development of predicted fire methods in World War I. However, indirect fire was area fire; it was and is not suitable for destroying point targets; its primary purpose is area suppression. Nevertheless, by the late 1970s precision-guided munitions started to appear, notably the US 155 mm Copperhead and its Soviet 152 mm Krasnopol equivalent that had success in Indian service. These relied on laser designation to 'illuminate' the target that the shell homed onto. However, in the early 21st century, the Global Positioning System (GPS) enabled relatively cheap and accurate guidance for shells and missiles, notably the US 155 mm Excalibur and the 227 mm GMLRS rocket. The introduction of these led to a new issue, the need for very accurate three-dimensional target coordinates—the mensuration process. Weapons covered by the term 'modern artillery' include "cannon" artillery (such as howitzer, mortar, and field gun) and rocket artillery. Certain smaller-caliber mortars are more properly designated small arms rather than artillery, albeit indirect-fire small arms. This term also came to include coastal artillery, which traditionally defended coastal areas against seaborne attack and controlled the passage of ships. With the advent of powered flight at the start of the 20th century, artillery also included ground-based anti-aircraft batteries. The term "artillery" has traditionally not been used for projectiles with internal guidance systems, preferring the term "missilery", though some modern artillery units employ surface-to-surface missiles. Advances in terminal guidance systems for small munitions have allowed large-caliber guided projectiles to be developed, blurring this distinction. "See Long Range Precision Fires (LRPF), Joint terminal attack controller" Ammunition. One of the most important roles of logistics is the supply of munitions as a primary type of artillery consumable, their storage (ammunition dump, arsenal, magazine) and the provision of fuzes, detonators and warheads at the point where artillery troops will assemble the charge, projectile, bomb or shell. A round of artillery ammunition comprises four components: Fuzes. Fuzes are the devices that initiate an artillery projectile, either to detonate its High Explosive (HE) filling or eject its cargo (illuminating flare or smoke canisters being examples). The official military spelling is "fuze". Broadly there are four main types: Most artillery fuzes are nose fuzes. However, base fuzes have been used with armor-piercing shells and for squash head (High-Explosive Squash Head (HESH) or High Explosive, Plastic (HEP) anti-tank shells). At least one nuclear shell and its non-nuclear spotting version also used a multi-deck mechanical time fuze fitted into its base. Impact fuzes were, and in some armies remain, the standard fuze for HE projectiles. 
Their default action is normally 'superquick', some have had a 'graze' action which allows them to penetrate light cover and others have 'delay'. Delay fuzes allow the shell to penetrate the ground before exploding. Armor or Concrete-Piercing (AP or CP) fuzes are specially hardened. During World War I and later, ricochet fire with delay or graze fuzed HE shells, fired with a flat angle of descent, was used to achieve airburst. HE shells can be fitted with other fuzes. Airburst fuzes usually have a combined airburst and impact function. However, until the introduction of proximity fuzes, the airburst function was mostly used with cargo munitions—for example, shrapnel, illumination, and smoke. The larger calibers of anti-aircraft artillery are almost always used airburst. Airburst fuzes have to have the fuze length (running time) set on them. This is done just before firing using either a wrench or a fuze setter pre-set to the required fuze length. Early airburst fuzes used igniferous timers which lasted into the second half of the 20th century. Mechanical time fuzes appeared in the early part of the century. These required a means of powering them. The Thiel mechanism used a spring and escapement (i.e. 'clockwork'), Junghans used centrifugal force and gears, and Dixi used centrifugal force and balls. From about 1980, electronic time fuzes started replacing mechanical ones for use with cargo munitions. Proximity fuzes have been of two types: photo-electric or radar. The former was not very successful and seems only to have been used with British anti-aircraft artillery 'unrotated projectiles' (rockets) in World War II. Radar proximity fuzes were a big improvement over the mechanical (time) fuzes which they replaced. Mechanical time fuzes required an accurate calculation of their running time, which was affected by non-standard conditions. With HE (requiring a burst 20 to above the ground), if this was very slightly wrong the rounds would either hit the ground or burst too high. Accurate running time was less important with cargo munitions that burst much higher. The first radar proximity fuzes (perhaps originally codenamed 'VT' and later called Variable Time (VT)) were invented by the British and developed by the US and initially used against aircraft in World War II. Their ground use was delayed for fear of the enemy recovering 'blinds' (artillery shells which failed to detonate) and copying the fuze. The first proximity fuzes were designed to detonate about above the ground. These air-bursts are much more lethal against personnel than ground bursts because they deliver a greater proportion of useful fragments and deliver them into terrain where a prone soldier would be protected from ground bursts. However, proximity fuzes can suffer premature detonation because of the moisture in heavy rain clouds. This led to 'Controlled Variable Time' (CVT) after World War II. These fuzes have a mechanical timer that switched on the radar about 5 seconds before expected impact, they also detonated on impact. The proximity fuze emerged on the battlefields of Europe in late December 1944. They have become known as the U.S. Artillery's "Christmas present", and were much appreciated when they arrived during the Battle of the Bulge. They were also used to great effect in anti-aircraft projectiles in the Pacific against "kamikaze" as well as in Britain against V-1 flying bombs. Electronic multi-function fuzes started to appear around 1980. 
Using solid-state electronics, they were relatively cheap and reliable, and became the standard fitted fuze in operational ammunition stocks in some western armies. The early versions were often limited to proximity airburst, albeit with height of burst options, and impact. Some offered a go/no-go functional test through the fuze setter. Later versions introduced induction fuze setting and testing instead of physically placing a fuze setter on the fuze. The latest, such as Junghans' DM84U, provide options for superquick, delay, a choice of proximity heights of burst, time, and a choice of foliage penetration depths. A new type of artillery fuze will appear soon. In addition to other functions, these offer some course correction capability, not full precision but sufficient to significantly reduce the dispersion of the shells on the ground. Projectiles. The projectile is the munition or "bullet" fired downrange. This may be an explosive device. Projectiles have traditionally been classified as "shot" or "shell", the former being solid and the latter having some form of "payload". Shells can be divided into three configurations: bursting, base ejection or nose ejection. The latter is sometimes called the shrapnel configuration. The most modern is base ejection, which was introduced in World War I. Base and nose ejection are almost always used with airburst fuzes. Bursting shells use various types of fuze depending on the nature of the payload and the tactical need at the time. Payloads have included: Propellant. Most forms of artillery require a propellant to propel the projectile at the target. Propellant is always a low explosive, which means it deflagrates, rather than detonating like high explosives. The shell is accelerated to a high velocity in a very short time by the rapid generation of gas from the burning propellant. This high pressure is achieved by burning the propellant in a contained area, either the chamber of a gun barrel or the combustion chamber of a rocket motor. Until the late 19th century, the only available propellant was black powder. It had many disadvantages as a propellant; it had relatively low power, requiring large amounts of powder to fire projectiles, and created thick clouds of white smoke that would obscure the targets, betray the positions of guns, and make aiming impossible. In 1846, nitrocellulose (also known as guncotton) was discovered, and the high explosive nitroglycerin was discovered at nearly the same time. Nitrocellulose was significantly more powerful than black powder, and was smokeless. Early guncotton was unstable, however, and burned very fast and hot, leading to greatly increased barrel wear. Widespread introduction of smokeless powder would wait until the advent of the double-base powders, which combine nitrocellulose and nitroglycerin to produce powerful, smokeless, stable propellant. Many other formulations were developed in the following decades, generally trying to find the optimum characteristics of a good artillery propellant – low temperature, high energy, non-corrosive, highly stable, cheap, and easy to manufacture in large quantities. Modern gun propellants are broadly divided into three classes: single-base propellants that are mainly or entirely nitrocellulose based, double-base propellants consisting of a combination of nitrocellulose and nitroglycerin, and triple-base propellants composed of a combination of nitrocellulose, nitroglycerin and nitroguanidine. 
Artillery shells fired from a barrel can be assisted to greater range in three ways: Propelling charges for barrel artillery can be provided either as cartridge bags or in metal cartridge cases. Generally, anti-aircraft artillery and smaller-caliber (up to 3" or 76.2 mm) guns use metal cartridge cases that include the round and propellant, similar to a modern rifle cartridge. This simplifies loading and is necessary for very high rates of fire. Bagged propellant allows the amount of powder to be raised or lowered, depending on the range to the target. It also makes handling of larger shells easier. Cases and bags require totally different types of breech. A metal case holds an integral primer to initiate the propellant and provides the gas seal to prevent the gases leaking out of the breech; this is called obturation. With bagged charges, the breech itself provides obturation and holds the primer. In either case, the primer is usually percussion, but electrical is also used, and laser ignition is emerging. Modern 155 mm guns have a primer magazine fitted to their breech. Artillery ammunition has four classifications according to use: Field artillery system. Because modern field artillery mostly uses indirect fire, the guns have to be part of a system that enables them to attack targets invisible to them, in accordance with the combined arms plan. The main functions in the field artillery system are: All these calculations to produce a quadrant elevation (or range) and azimuth were done manually using instruments, tabulated data, data of the moment, and approximations until battlefield computers started appearing in the 1960s and 1970s. While some early calculators copied the manual method (typically substituting polynomials for tabulated data), computers use a different approach. They simulate a shell's trajectory by 'flying' it in short steps and applying data about the conditions affecting the trajectory at each step. This simulation is repeated until it produces a quadrant elevation and azimuth that lands the shell within the required 'closing' distance of the target coordinates. NATO has a standard ballistic model for computer calculations and has expanded the scope of this into the NATO Armaments Ballistic Kernel (NABK) within the SG2 Shareable (Fire Control) Software Suite (S4). Logistics. Supply of artillery ammunition has always been a major component of military logistics. Up until World War I some armies made artillery responsible for all forward ammunition supply because the load of small arms ammunition was trivial compared to artillery. Different armies use different approaches to ammunition supply, which can vary with the nature of operations. Differences include where the logistic service transfers artillery ammunition to artillery, the amount of ammunition carried in units and the extent to which stocks are held at unit or battery level. A key difference is whether supply is 'push' or 'pull'. In the former, the 'pipeline' keeps pushing ammunition into formations or units at a defined rate. In the latter, units fire as tactically necessary and replenish to maintain or reach their authorised holding (which can vary), so the logistic system has to be able to cope with surge and slack. Classification. Artillery types can be categorised in several ways, for example by type or size of weapon or ordnance, by role or by organizational arrangements. Types of ordnance. The types of cannon artillery are generally distinguished by the velocity at which they fire projectiles. 
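The 'fly the shell in short steps' approach described above under "Field artillery system" can be sketched as follows. This Python toy model uses an assumed lumped drag coefficient and a bisection search for the low-angle quadrant elevation; it is only an illustration of the iterative idea, not a real ballistic kernel such as NABK, and every constant in it is an assumed value.

```python
# Toy version of the 'fly the shell in short steps' method: a point-mass
# trajectory with a simple lumped drag term, and a bisection search over
# quadrant elevation until the simulated impact falls within a 'closing'
# distance of the target range. All constants are assumed, illustrative values.
import math

G = 9.81        # gravity, m/s^2
DRAG = 0.00002  # assumed lumped drag coefficient, 1/m
DT = 0.01       # simulation time step, s

def impact_range(muzzle_velocity: float, elevation_deg: float) -> float:
    """Fly the shell in short steps and return the range at which it lands."""
    vx = muzzle_velocity * math.cos(math.radians(elevation_deg))
    vy = muzzle_velocity * math.sin(math.radians(elevation_deg))
    x = y = 0.0
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        vx -= DRAG * speed * vx * DT        # drag opposes motion
        vy -= (G + DRAG * speed * vy) * DT  # gravity plus drag
        x += vx * DT
        y += vy * DT
    return x

def elevation_for_range(muzzle_velocity: float, target_range: float,
                        closing: float = 25.0) -> float:
    """Bisect on elevation (low-angle solution) until within 'closing' metres."""
    lo, hi = 0.0, 45.0
    for _ in range(50):
        mid = (lo + hi) / 2
        r = impact_range(muzzle_velocity, mid)
        if abs(r - target_range) < closing:
            break
        if r < target_range:
            lo = mid
        else:
            hi = mid
    return mid

# Assumed muzzle velocity of 680 m/s and a target 10 km away.
print(f"quadrant elevation ~{elevation_for_range(680.0, 10000.0):.1f} deg")
```

A real fire-control computer repeats essentially this loop, but with measured data of the moment (wind, air density, propellant temperature, muzzle velocity) applied at each step instead of a single fixed drag constant.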
Types of artillery: Modern field artillery can also be split into two other subcategories: towed and self-propelled. As the name suggests, towed artillery has a prime mover, usually an artillery tractor or truck, to move the piece, crew, and ammunition around. Towed artillery is in some cases equipped with an APU for small displacements. Self-propelled artillery is permanently mounted on a carriage or vehicle with room for the crew and ammunition and is thus capable of moving quickly from one firing position to another, both to support the fluid nature of modern combat and to avoid counter-battery fire. It includes mortar carrier vehicles, many of which allow the mortar to be removed from the vehicle and be used dismounted, potentially in terrain in which the vehicle cannot navigate, or in order to avoid detection. Organizational types. At the beginning of the modern artillery period, the late 19th century, many armies had three main types of artillery; in some cases they were sub-branches within the artillery branch, in others they were separate branches or corps. There were also other types excluding the armament fitted to warships: After World War I many nations merged these different artillery branches, in some cases keeping some as sub-branches. Naval artillery disappeared apart from that belonging to marines. However, two new branches of artillery emerged during that war and its aftermath; both used specialised guns (and a few rockets) and used direct rather than indirect fire, and in the 1950s and 1960s both started to make extensive use of missiles: However, the general switch by artillery to indirect fire before and during World War I led to a reaction in some armies. The result was accompanying or infantry guns. These were usually small, short-range guns that could be easily man-handled and used mostly for direct fire, but some could use indirect fire. Some were operated by the artillery branch but under command of the supported unit. In World War II they were joined by self-propelled assault guns, although other armies adopted infantry or close support tanks in armoured branch units for the same purpose; subsequently, tanks generally took on the accompanying role. Equipment types. The three main types of artillery "gun" are guns, howitzers, and mortars. During the 20th century, guns and howitzers have steadily merged in artillery use, making a distinction between the terms somewhat meaningless. By the end of the 20th century, true guns with calibers larger than about 60 mm had become very rare in artillery use, the main users being tanks, ships, and a few residual anti-aircraft and coastal guns. The term "cannon" is a United States generic term that includes guns, howitzers, and mortars; it is not used in other English-speaking armies. The traditional definitions differentiated between guns and howitzers in terms of maximum elevation (well less than 45° as opposed to close to or greater than 45°), number of charges (one or more than one charge), and having higher or lower muzzle velocity, sometimes indicated by barrel length. These three criteria give eight possible combinations, of which guns and howitzers are but two. However, modern "howitzers" have higher velocities and longer barrels than the equivalent "guns" of the first half of the 20th century. True guns are characterized by long range, having a maximum elevation significantly less than 45°, a high muzzle velocity and hence a relatively long barrel, smooth bore (no rifling) and a single charge. 
The latter often led to fixed ammunition where the projectile is locked to the cartridge case. There is no generally accepted minimum muzzle velocity or barrel length associated with a gun. Howitzers can fire at maximum elevations at least close to 45°; elevations up to about 70° are normal for modern howitzers. Howitzers also have a choice of charges, meaning that the same elevation angle of fire will achieve a different range depending on the charge used. They have rifled bores, lower muzzle velocities and shorter barrels than equivalent guns. All this means they can deliver fire with a steep angle of descent. Because of their multi-charge capability, their ammunition is mostly separate loading (the projectile and propellant are loaded separately). That leaves six combinations of the three criteria, some of which have been termed gun howitzers. A term first used in the 1930s when howitzers with relatively high maximum muzzle velocities were introduced, it never became widely accepted, most armies electing to widen the definition of "gun" or "howitzer". By the 1960s, most equipment had maximum elevations up to about 70°, was multi-charge, and had quite high maximum muzzle velocities and relatively long barrels. Mortars are simpler. The modern mortar originated in World War I and there were several patterns. After that war, most mortars settled on the Stokes pattern, characterized by a short barrel, smooth bore, low muzzle velocity, elevation angle of firing generally greater than 45°, and a very simple and light mounting using a "baseplate" on the ground. The projectile with its integral propelling charge was dropped down the barrel from the muzzle to hit a fixed firing pin. Since that time, a few mortars have become rifled and adopted breech loading. There are other recognized typifying characteristics for artillery. One such characteristic is the type of obturation used to seal the chamber and prevent gases escaping through the breech. This may use a metal cartridge case that also holds the propelling charge, a configuration called "QF" or "quickfiring" by some nations. The alternative does not use a metal cartridge case, the propellant being merely bagged or in combustible cases with the breech itself providing all the sealing. This is called "BL" or "breech loading" by some nations. A second characteristic is the form of propulsion. Modern equipment can either be towed or self-propelled (SP). A towed gun fires from the ground and any inherent protection is limited to a gun shield. Towing by horse teams lasted throughout World War II in some armies, but others were fully mechanized with wheeled or tracked gun towing vehicles by the outbreak of that war. The size of a towing vehicle depends on the weight of the equipment and the amount of ammunition it has to carry. A variation of towed is portee, where the vehicle carries the gun, which is dismounted for firing. Mortars are often carried this way. A mortar is sometimes carried in an armored vehicle and can either fire from it or be dismounted to fire from the ground. Since the early 1960s it has been possible to carry lighter towed guns and most mortars by helicopter. Even before that, they were parachuted or landed by glider from the time of the first airborne trials in the USSR in the 1930s. In SP equipment, the gun is an integral part of the vehicle that carries it. SPs first appeared during World War I, but did not really develop until World War II. They are mostly tracked vehicles, but wheeled SPs started to appear in the 1970s. 
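Returning to the traditional three criteria for distinguishing guns from howitzers described above (maximum elevation, number of charges, muzzle velocity), the small sketch below encodes them as a toy classifier; the velocity threshold and the example parameter values are assumed purely for illustration.

```python
# Toy classifier for the traditional gun/howitzer distinction: maximum
# elevation (well below 45 deg or not), charges (single or multiple) and
# muzzle velocity (high or low). Thresholds are assumed, for illustration.

def classify(max_elevation_deg: float, charges: int, muzzle_velocity_ms: float) -> str:
    low_elevation = max_elevation_deg < 45
    single_charge = charges == 1
    high_velocity = muzzle_velocity_ms >= 600   # assumed dividing line
    if low_elevation and single_charge and high_velocity:
        return "gun (traditional definition)"
    if not low_elevation and not single_charge and not high_velocity:
        return "howitzer (traditional definition)"
    return "one of the six mixed combinations, e.g. a 'gun-howitzer'"

# A modern 155 mm equipment with 70 deg elevation, multiple charges and a
# fairly high muzzle velocity falls into a mixed case, as the text notes.
print(classify(max_elevation_deg=70, charges=8, muzzle_velocity_ms=800))
print(classify(max_elevation_deg=40, charges=1, muzzle_velocity_ms=900))
```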
Some SPs have no armor and carry few or no other weapons and ammunition. Armored SPs usually carry a useful ammunition load. Early armored SPs were mostly a "casemate" configuration, in essence an open-top armored box offering only limited traverse. However, most modern armored SPs have a fully enclosed armored turret, usually giving full traverse for the gun. Many SPs cannot fire without deploying stabilizers or spades, sometimes hydraulic. A few SPs are designed so that the recoil forces of the gun are transferred directly onto the ground through a baseplate. A few towed guns have been given limited self-propulsion by means of an auxiliary engine. Two other forms of tactical propulsion were used in the first half of the 20th century: railways, or transporting the equipment by road as two or three separate loads, with disassembly and re-assembly at the beginning and end of the journey. Railway artillery took two forms: railway mountings for heavy and super-heavy guns and howitzers, and armored trains as "fighting vehicles" armed with light artillery in a direct fire role. Disassembled transport was also used with heavy and super-heavy weapons and lasted into the 1950s. Caliber categories. A third form of artillery typing is to classify it as "light", "medium", "heavy" and various other terms. It appears to have been introduced in World War I, which spawned a very wide array of artillery in all sorts of sizes so a simple categorical system was needed. Some armies defined these categories by bands of calibers. Different bands were used for different types of weapons—field guns, mortars, anti-aircraft guns and coastal guns. Modern operations. List of countries in order of amount of artillery (only conventional barrel ordnance is given, in use with land forces): Artillery is used in a variety of roles depending on its type and caliber. The general role of artillery is to provide "fire support"—"the application of fire, coordinated with the manoeuvre of forces to destroy, "neutralize" or "suppress" the enemy". This NATO definition makes artillery a supporting arm although not all NATO armies agree with this logic. The "italicised" terms are NATO's. Unlike rockets, guns (or howitzers as some armies still call them) and mortars are suitable for delivering "close supporting fire". However, they are all suitable for providing "deep supporting fire" although the limited range of many mortars tends to exclude them from the role. Their control arrangements and limited range also mean that mortars are most suited to "direct supporting fire". Guns are used either for this or "general supporting fire" while rockets are mostly used for the latter. However, lighter rockets may be used for direct fire support. These rules of thumb apply to NATO armies. Modern mortars, because of their lighter weight and simpler, more transportable design, are usually an integral part of infantry and, in some armies, armor units. This means they generally do not have to "concentrate" their fire so their shorter range is not a disadvantage. Some armies also consider infantry-operated mortars to be more responsive than artillery, but this is a function of the control arrangements and not the case in all armies. However, mortars have always been used by artillery units and remain with them in many armies, including a few in NATO. In NATO armies artillery is usually assigned a tactical mission that establishes its relationship and responsibilities to the formation or units it is assigned to.
Not all NATO nations use these terms, and other terms are probably used outside NATO. The standard terms are: "direct support", "general support", "general support reinforcing" and "reinforcing". These tactical missions are in the context of the command authority: "operational command", "operational control", "tactical command" or "tactical control". In NATO direct support generally means that the directly supporting artillery unit provides observers and liaison to the manoeuvre troops being supported; typically an artillery battalion or equivalent is assigned to a brigade and its batteries to the brigade's battalions. However, some armies achieve this by placing the assigned artillery units under command of the directly supported formation. Nevertheless, the batteries' fire can be "concentrated" onto a single target, as can the fire of units in range and with the other tactical missions. Application of fire. There are several dimensions to this subject. The first is the notion that fire may be against an "opportunity" target or may be "arranged". If it is the latter it may be either "on-call" or "scheduled". Arranged targets may be part of a "fire plan". Fire may be either "observed" or "unobserved"; if the former, it may be "adjusted"; if the latter, it has to be "predicted". Observation of adjusted fire may be directly by a forward observer or indirectly via some other "target acquisition" system. NATO also recognises several different types of fire support for tactical purposes: These purposes have existed for most of the 20th century, although their definitions have evolved and will continue to do so; the lack of "suppression" in "counterbattery" is an omission. Broadly they can be defined as either: Two other NATO terms also need definition: The tactical purposes also include various "mission verbs", a rapidly expanding subject with the modern concept of "effects-based operations". "Targeting" is the process of selecting targets and matching the appropriate response to them, taking account of operational requirements and capabilities. It requires consideration of the type of fire support required and the extent of coordination with the supported arm. It involves decisions about: The "targeting" process is the key aspect of tactical fire control. Depending on the circumstances and national procedures it may all be undertaken in one place or may be distributed. In armies practicing control from the front, most of the process may be undertaken by a forward observer or other target acquirer. This is particularly the case for a smaller target requiring only a few fire units. The extent to which the process is formal or informal and makes use of computer-based systems, documented norms or experience and judgement also varies widely between armies and with circumstances. Surprise may be essential or irrelevant. It depends on what effects are required and whether or not the target is likely to move or quickly improve its protective posture. During World War II UK researchers concluded that for impact-fuzed munitions the relative risks were as follows: Airburst munitions significantly increase the relative risk for lying men, etc. Historically most casualties occur in the first 10–15 seconds of fire, i.e. the time needed to react and improve protective posture; however, this is less relevant if airburst is used. There are several ways of making best use of this brief window of maximum vulnerability: Counter-battery fire.
Modern counter-battery fire developed in World War I, with the objective of defeating the enemy's artillery. Typically such fire was used to suppress enemy batteries when they were or were about to interfere with the activities of friendly forces (such as to prevent enemy defensive artillery fire against an impending attack) or to systematically destroy enemy guns. In World War I the latter required air observation. The first indirect counter-battery fire was in May 1900 by an observer in a balloon. Enemy artillery can be detected in two ways, either by direct observation of the guns from the air or by ground observers (including specialist reconnaissance), or from their firing signatures. This includes radars tracking the shells in flight to determine their place of origin, sound ranging detecting guns firing and resecting their position from pairs of microphones, or cross-observation of gun flashes by human observers or opto-electronic devices, although the widespread adoption of 'flashless' propellant limited the effectiveness of the latter. Once hostile batteries have been detected they may be engaged immediately by friendly artillery or later at an optimum time, depending on the tactical situation and the counter-battery policy. Air strike is another option. In some situations the task is to locate all active enemy batteries for attack using counter-battery fire at the appropriate moment in accordance with a plan developed by artillery intelligence staff. In other situations counter-battery fire may occur whenever a battery is located with sufficient accuracy. Modern counter-battery target acquisition uses unmanned aircraft, counter-battery radar, ground reconnaissance and sound-ranging. Counter-battery fire may be adjusted by some of the systems, for example the operator of an unmanned aircraft can 'follow' a battery if it moves. Defensive measures by batteries include frequently changing position or constructing defensive earthworks, the tunnels used by North Korea being an extreme example. Counter-measures include air defence against aircraft and attacking counter-battery radars physically and electronically. Field artillery team. 'Field Artillery Team' is a US term, and the following description and terminology apply to the US; other armies are broadly similar but differ in significant details. Modern field artillery (post–World War I) has three distinct parts: the Forward Observer (FO), the Fire Direction Center (FDC) and the actual guns themselves. The forward observer observes the target using tools such as binoculars, laser rangefinders and designators, and calls back fire missions on his radio, or relays the data through a portable computer via an encrypted digital radio connection protected from jamming by computerized frequency hopping. A lesser-known part of the team is the FAS or Field Artillery Survey team, which sets up the "Gun Line" for the cannons. Today most artillery battalions use an "Aiming Circle", which allows for faster setup and more mobility. FAS teams are still used for checks and balances, and if a gun battery has issues with the "Aiming Circle" a FAS team will set up the gun line for it. The FO can communicate directly with the battery FDC, of which there is one for each battery of 4–8 guns.
Otherwise the several FOs communicate with a higher FDC, such as at battalion level, and the higher FDC prioritizes the targets and allocates fires to individual batteries as needed to engage the targets that are spotted by the FOs or to perform preplanned fires. The battery FDC computes the firing data—ammunition to be used, powder charge, fuse settings, the direction to the target, the quadrant elevation to be fired at to reach the target, which gun will fire any rounds needed for adjusting on the target, and the number of rounds to be fired on the target by each gun once the target has been accurately located—and sends it to the guns. Traditionally this data is relayed via radio or wire communications as a warning order to the guns, followed by orders specifying the type of ammunition and fuse setting, direction, and the elevation needed to reach the target, and the method of adjustment or orders for fire for effect (FFE). However, in more advanced artillery units, this data is relayed through a digital radio link. Other parts of the field artillery team include meteorological analysis to determine the temperature, humidity and pressure of the air and wind direction and speed at different altitudes. Radar is also used both to determine the location of enemy artillery and mortar batteries and to determine the precise actual strike points of rounds fired by a battery; comparing those strike points with the expected ones allows a registration to be computed, so that future rounds can be fired with much greater accuracy. Time on target. A technique called time on target (TOT) was developed by the British Army in North Africa at the end of 1941 and early 1942, particularly for counter-battery fire and other concentrations, and it proved very popular. It relied on BBC time signals, which enabled officers to synchronize their watches to the second and avoided both the need to use military radio networks, with the attendant risk of losing surprise, and the need for field telephone networks in the desert. With this technique the time of flight from each fire unit (battery or troop) to the target is taken from the range or firing tables, or from the computer, and each engaging fire unit subtracts its time of flight from the TOT to determine its time to fire. An executive order to fire is given to all guns in the fire unit at the correct moment to fire. When each fire unit fires its rounds at its individual firing time, all the opening rounds reach the target area almost simultaneously. This is especially effective when combined with techniques that allow fires for effect to be made without preliminary adjusting fires. Multiple round simultaneous impact. Multiple round simultaneous impact (MRSI) is a modern version of the earlier time on target concept. MRSI is when a single gun fires multiple shells so all arrive at the same target simultaneously. This is possible because there is more than one trajectory for a round to fly to any given target. Typically one is below 45 degrees from horizontal and the other is above it, and by using different-sized propellant charges with each shell, it is possible to utilize more than two trajectories. Because the higher trajectories cause the shells to arc higher into the air, they take longer to reach the target. If shells are fired on higher trajectories for initial volleys (starting with the shell with the most propellant and working down) and later volleys are fired on the lower trajectories, with the correct timing the shells will all arrive at the same target simultaneously.
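A rough worked example can make both ideas concrete. The sketch below is only an illustration, not taken from any real fire-control system: it uses drag-free vacuum ballistics and invented values, whereas real firing tables also account for air resistance, meteorology and charge-to-charge variation. It computes the low and high elevation angles that reach the same range for a single muzzle velocity, and applies the time-on-target rule of subtracting each round's time of flight from the common impact time to find when each round must be fired.

import math

G = 9.81  # gravitational acceleration in m/s^2

def mrsi_fire_plan(muzzle_velocity, target_range, time_on_target):
    """Return (fire time, elevation in degrees, time of flight) for the high and
    low vacuum trajectories that reach the same range, earliest firing first."""
    s = G * target_range / muzzle_velocity ** 2
    if s > 1.0:
        raise ValueError("target is beyond maximum range for this charge")
    theta_low = 0.5 * math.asin(s)          # from the range equation R = v^2 sin(2*theta) / g
    theta_high = math.pi / 2 - theta_low    # the second solution, above 45 degrees
    plan = []
    for theta in (theta_high, theta_low):   # the slower, higher round must be fired first
        time_of_flight = 2 * muzzle_velocity * math.sin(theta) / G
        fire_time = time_on_target - time_of_flight   # the time-on-target subtraction
        plan.append((fire_time, math.degrees(theta), time_of_flight))
    return plan

# Hypothetical values: 300 m/s muzzle velocity, target 6 km away, impact at T = 120 s.
for fire_time, elevation, flight in mrsi_fire_plan(300.0, 6000.0, 120.0):
    print(f"fire at {fire_time:5.1f} s, elevation {elevation:4.1f} deg, flight {flight:4.1f} s")

In this toy example the high-angle round is fired roughly half a minute before the low-angle one, yet both arrive together; using further charges, and hence further muzzle velocities, adds more usable trajectories in the same way, so that several rounds from one gun land at the same moment.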
This is useful because many more shells can land on the target with no warning. With traditional methods of firing, the target area may have time (however long it takes to reload and re-fire the guns) to take cover between volleys. However, guns capable of burst fire can deliver multiple rounds in a few seconds if they use the same firing data for each, and if guns in more than one location are firing on one target they can use Time on Target procedures so that all their shells arrive at the same time and target. MRSI has a few prerequisites. The first is guns with a high rate of fire. The second is the ability to use different-sized propellant charges. The third is a fire control computer that can compute MRSI volleys and produce firing data that is sent to each gun and then presented to the gun commander in the correct order. The number of rounds that can be delivered in MRSI depends primarily on the range to the target and the rate of fire. To allow the most shells to reach the target, the target has to be in range of the lowest propellant charge. Examples of guns with a rate of fire that makes them suitable for MRSI include the UK's AS-90, South Africa's Denel G6-52 (which can land six rounds simultaneously at targets at least away), Germany's Panzerhaubitze 2000 (which can land five rounds simultaneously at targets at least away), Slovakia's 155 mm SpGH ZUZANA model 2000, and K9 Thunder. The Archer project (developed by BAE-Systems Bofors in Sweden) is a 155 mm howitzer on a wheeled chassis which is claimed to be able to deliver up to six shells on target simultaneously from the same gun. The 120 mm twin-barrel AMOS mortar system, jointly developed by Hägglunds (Sweden) and Patria (Finland), is capable of 7 + 7 shells MRSI. The United States Crusader program (now cancelled) was slated to have MRSI capability. It is unclear how many fire control computers have the necessary capabilities. Two-round MRSI firings were a popular artillery demonstration in the 1960s, where well-trained detachments could show off their skills for spectators. Air burst. The destructiveness of artillery bombardments can be enhanced when some or all of the shells are set for airburst, meaning that they explode in the air above the target instead of upon impact. This can be accomplished either through time fuzes or proximity fuzes. Time fuzes use a precise timer to detonate the shell after a preset delay. This technique is tricky and slight variations in the functioning of the fuze can cause it to explode too high and be ineffective, or to strike the ground instead of exploding above it. Since December 1944 (Battle of the Bulge), proximity-fuzed artillery shells have been available that take the guesswork out of this process. These employ a miniature, low-powered radar transmitter in the fuze to detect the ground and explode the shell at a predetermined height above it. The return of the weak radar signal completes an electrical circuit in the fuze which explodes the shell. The proximity fuze itself was developed by the British to increase the effectiveness of anti-aircraft warfare. This is a very effective tactic against infantry and light vehicles, because it scatters the fragmentation of the shell over a larger area and prevents it from being blocked by terrain or entrenchments that do not include some form of robust overhead cover.
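The sensitivity of time fuzes mentioned above can also be illustrated with a small calculation. The sketch below is a simplified, hypothetical illustration using the same drag-free ballistics and made-up values as the earlier example, and it assumes the target sits at the same height as the gun; it is not a real fuze-setting procedure. It finds the fuze delay at which the shell passes the desired height above the ground on the descending branch of its trajectory.

import math

G = 9.81  # m/s^2

def time_fuze_setting(muzzle_velocity, elevation_deg, burst_height):
    """Fuze delay in seconds at which a drag-free shell passes the given height
    above the gun on the way down, i.e. the setting needed for an airburst there."""
    vy = muzzle_velocity * math.sin(math.radians(elevation_deg))
    discriminant = vy * vy - 2 * G * burst_height
    if discriminant < 0:
        raise ValueError("the trajectory never reaches the requested burst height")
    # Of the two times at which the shell is at this height, the later one is on the descent.
    return (vy + math.sqrt(discriminant)) / G

# Hypothetical round: 300 m/s at 40 degrees elevation, burst 10 m above the ground.
setting = time_fuze_setting(300.0, 40.0, 10.0)
print(f"fuze setting: {setting:.2f} s")

In this example the shell is descending at almost 200 m/s when the fuze functions, so a timing error of only 0.05 s moves the burst point by roughly ten metres vertically, which is the guesswork a proximity fuze, sensing the ground directly, removes.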
Combined with TOT or MRSI tactics that give no warning of the incoming rounds, these rounds are especially devastating because many enemy soldiers are likely to be caught in the open; even more so if the attack is launched against an assembly area or troops moving in the open rather than a unit in an entrenched tactical position. Use in monuments. Numerous war memorials around the world incorporate an artillery piece that was used in the war or battle commemorated.
2510
Arnulf of Carinthia
Arnulf of Carinthia ( 850 – 8 December 899) was the duke of Carinthia who overthrew his uncle Emperor Charles the Fat to become the Carolingian king of East Francia from 887, the disputed king of Italy from 894 and the disputed emperor from February 22, 896, until his death at Regensburg, Bavaria. Early life. Illegitimacy and early life. Arnulf was the illegitimate son of Carloman of Bavaria, and Liutswind, who may have been the sister of Ernst, Count of the Bavarian Nordgau Margraviate, in the area of the Upper Palatinate, or perhaps the burgrave of Passau, according to other sources. After Arnulf's birth, Carloman married, before 861, a daughter of that same Count Ernst, who died after 8 August 879. As it is mainly West-Franconian historiography that speaks of Arnulf's illegitimacy, it is quite possible that the two women are actually the same person, Liutswind, and that Carloman married Arnulf's mother, thus legitimizing his son. Arnulf was granted the rule over the Duchy of Carinthia, a Frankish vassal state and successor of the ancient Principality of Carantania by his father, after Carloman reconciled with his own father, King Louis the German, and was made king in the Duchy of Bavaria. Arnulf spent his childhood in "Mosaburch" or Mosapurc, which is widely believed to be Moosburg in Carinthia. Moosburg was a few miles away from one of the imperial residences, the Carolingian Kaiserpfalz at Karnburg, which had been the residence of the Carantanian princes. Arnulf kept his seat here, and from later events it may be inferred that the Carantanians, from an early time, treated him as their own duke. Later, after he had been crowned King of East Francia, Arnulf turned his old territory of Carinthia into the March of Carinthia, a part of the Duchy of Bavaria. Regional ruler. After Carloman was incapacitated by a stroke in 879, Louis the Younger inherited Bavaria, Charles the Fat was given the Kingdom of Italy, and Arnulf was confirmed in Carinthia by an agreement with Carloman. However, Bavaria was more or less ruled by Arnulf during the summer and autumn of 879 while his father arranged his succession. He was also granted "Pannonia," in the words of the "Annales Fuldenses", or "Carantanum," in the words of Regino of Prüm. The division of the realm was confirmed in 880 after Carloman's death. When Engelschalk II of Pannonia in 882 rebelled against Margrave Aribo and ignited the Wilhelminer War, Arnulf supported him and accepted his and his brother's homage. This ruined Arnulf's relationship with his uncle, Emperor Charles the Fat, and put him at war with Svatopluk of Moravia. Pannonia was invaded, but Arnulf refused to give up the young Wilhelminers. Arnulf did not make peace with Svatopluk until late 885, by which time the Moravian ruler was loyal to the emperor. Some scholars see this war as destroying Arnulf's hopes of succeeding Charles the Fat. King of East Francia. Arnulf took the leading role in the deposition of Charles the Fat. With the support of the Frankish nobles, Arnulf called a Diet at Tribur and deposed Charles in November 887, under threat of military action. Charles peacefully agreed to this involuntary retirement, but not without first chastising his nephew for his treachery and asking for a few royal villas in Swabia on which to live out his final months, which Arnulf granted him. 
Arnulf, having distinguished himself in the war against the Slavs, was then elected king by the nobles of East Francia (only the eastern realm, though Charles had ruled the whole of the Frankish Empire). West Francia, the Kingdom of Burgundy, and the Kingdom of Italy elected their own kings from the Carolingian family. Like many rulers of the period, Arnulf was heavily involved in ecclesiastical disputes. In 895, at the Diet of Tribur, he presided over a dispute between the Episcopal sees of Bremen, Hamburg and Cologne over jurisdictional authority, which saw Bremen and Hamburg remain a combined see, independent of the see of Cologne. Arnulf was more a fighter than a negotiator. In 890 he was successfully battling Slavs in Pannonia. In early/mid-891, Vikings invaded Lotharingia and crushed an East Frankish army at Maastricht. Terms such as "Vikings", "Danes", "Northmen" and "Norwegians" have been used loosely and interchangeably to describe these invaders. In September 891, Arnulf repelled the Vikings and essentially ended their attacks on that front. The "Annales Fuldenses" report that there were so many dead Northmen that their bodies blocked the run of the river. After this victory Arnulf built a castle on an island in the Dijle river. Intervention in West Francia. Arnulf took advantage of the problems in West Francia after the death of Charles the Fat to secure the territory of Lotharingia, which he converted into a kingdom for his son Zwentibold. In 889 Arnulf supported the claim of Louis the Blind to the kingdom of Provence, after receiving a personal appeal from Louis' mother, Ermengard, who came to see Arnulf at Forchheim in May 889. Recognising the superiority of Arnulf's position, in 888 king Odo of France formally accepted the suzerainty of Arnulf. In 893 Arnulf switched his support from Odo to Charles the Simple after being persuaded by Fulk, Archbishop of Reims, that it was in his best interests. Arnulf then took advantage of the following fighting between Odo and Charles in 894, harrying some territories of West Francia. At one point, Charles the Simple was forced to flee to Arnulf and ask for his protection. His intervention soon forced Pope Formosus to get involved, as he was worried that a divided and war weary West Francia would be easy prey for the Vikings. In 895 Arnulf summoned both Charles and Odo to his residence at Worms. Charles's advisers convinced him not to go, and he sent a representative in his place. Odo, on the other hand, personally attended, together with a large retinue, bearing many gifts for Arnulf. Angered by the non-appearance of Charles, he welcomed Odo at the Diet of Worms in May 895 and again supported Odo's claim to the throne of West Francia. In the same assembly he crowned his illegitimate son Zwentibold as the king of Lotharingia. Wars with Moravia. As early as 880 Arnulf had designs on Great Moravia and had the Frankish bishop Wiching of Nitra interfere with the missionary activities of the Eastern Orthodox priest Methodius, with the aim of preventing any potential for creating a unified Moravian state. Arnulf had formal relations with the ruler of the Moravian Kingdom, Svatopluk, using them to learn the latter's military and political secrets. Later, these tactics were used to occupy the territory of the Greater Moravian state. Arnulf failed to conquer the whole of Great Moravia in wars of 892, 893, and 899. 
Yet Arnulf did achieve some successes, in particular in 895, when the Duchy of Bohemia broke away from Great Moravia and became his vassal state. An accord was reached between him and Duke of Bohemia Borivoj I. Bohemia was thus freed from the dangers of Frankish invasion. In 893 or 894 Great Moravia probably lost a part of its territory — present-day western Hungary — to him. As a reward, Wiching became Arnulf's chancellor in 892. In his attempts to conquer Moravia, in 899 Arnulf reached out to the Magyars who had settled in the Carpathian Basin, and with their help he imposed a measure of control over Moravia. King of Italy and Holy Roman Emperor. In Italy Guy III of Spoleto and Berengar of Friuli fought over the Iron Crown of Lombardy. Berengar had been crowned king in 887, but Guy was then crowned in 889. While Pope Stephen V supported Guy, even crowning him Roman Emperor in 891, Arnulf threw his support behind Berengar. In 893 the new Pope Formosus, not trusting the newly crowned co-emperors Guy and his son Lambert, sent an embassy to Omuntesberch, where Arnulf was meeting with Svatopluk, to request that Arnulf come and liberate Italy, where he would be crowned emperor in Rome. Arnulf met the "Primores" of the Kingdom of Italy, dismissed them with gifts and promised to assist the pope. Arnulf then sent Zwentibold with a Bavarian army to join Berengar. They defeated Guy but were bought off and left in autumn. When Pope Formosus again asked Arnulf to invade, the duke personally led an army across the Alps early in 894. In January 894 Bergamo fell, and Count Ambrose, Guy's representative in the city, was hanged from a tree by the city's gates. Conquering all of the territory north of the Po River, Arnulf forced the surrender of Milan and then drove Guy out of Pavia, where Arnulf was crowned King of Italy. Arnulf went no further before Guy died suddenly in late autumn, and a fever incapacitated his troops. His march northward through the Alps was interrupted by Rudolph I of Burgundy, and it was only with great difficulty that Arnulf crossed the mountain range. In retaliation, Arnulf ordered Zwentibold to ravage Rudolph's kingdom. In the meantime, Lambert and his mother Ageltrude travelled to Rome to receive papal confirmation of his imperial succession, but when Pope Formosus, still desiring to crown Arnulf, refused, the pope was imprisoned in Castel Sant'Angelo. In September 895 a new papal embassy arrived in Regensburg beseeching Arnulf's aid. In October Arnulf undertook his second campaign into Italy. He crossed the Alps quickly and again took Pavia, but then he continued slowly, garnering support among the nobility of Tuscany. Maginulf, Count of Milan, and Walfred of Friuli joined him. Eventually even Adalbert II of Tuscany abandoned Lambert. Finding Rome locked against him and held by Ageltrude, Arnulf had to take the city by force on 21 February 896, freeing the pope. Arnulf was then greeted at the Ponte Milvio by the Roman Senate who escorted him into the Leonine City, where he was received by Pope Formosus on the steps of the Santi Apostoli. On 22 February 896 Formosus led the king into the church of St. Peter, anointed and crowned him as emperor, and saluted him as "Augustus". Arnulf then proceeded to the Basilica of Saint Paul Outside the Walls, where he received the homage of the Roman people, who swore "never to hand over the city to Lambert or his mother Ageltrude".
Arnulf then proceeded to exile to Bavaria two leading senators, Constantine and Stephen, who had helped Ageltrude to seize Rome. Leaving one of his vassals, Farold, to hold Rome, two weeks later Arnulf marched on Spoleto, where Ageltrude had fled to join Lambert. However, at this point Arnulf had a stroke, forcing him to call off the campaign and return to Bavaria. Rumours of the time attributed Arnulf's condition to poisoning at the hands of Ageltrude. Arnulf retained power in Italy only as long as he was personally there. On his way north, he stopped at Pavia where he crowned his illegitimate son Ratold as sub-king of Italy, after which he left Ratold in Milan in an attempt to preserve his hold on Italy. That same year Pope Formosus died, leaving Lambert once again in power, and both he and Berengar proceeded to kill any officials who had been appointed by Arnulf, forcing Ratold to flee from Milan to Bavaria. For the rest of his life Arnulf exercised very little control in Italy, and his agents in Rome did not prevent the accession of Pope Stephen VI in 896. The pope initially gave his support to Arnulf but eventually became a supporter of Lambert. Final years. In addition to aftereffects from the stroke, Arnulf contracted morbus pediculosis (infestation of pubic lice on his eyelid), which prevented him from effectively dealing with the problems besetting his reign. Italy was lost, raiders from Moravia and Magyars were continually harassing his lands, and Lotharingia was in revolt against Zwentibold. He was also plagued by escalating violence and power struggles among the lower Frankish nobility. On December 8, 899, Arnulf died at Ratisbon in present-day Bavaria. He is entombed in St. Emmeram's Basilica at Regensburg, which is now known as Schloss Thurn und Taxis, the palace of the Princes of Thurn und Taxis. He was succeeded as the king of East Francia by his only legitimate son by Ota, Louis the Child. After Louis' death in 911 at age 17 or 18, the East Frankish branch of the Carolingian dynasty ceased to exist. Arnulf had had the nobility recognize the rights of his illegitimate sons, Zwentibold and Ratold, as his successors. Zwentibold continued to rule Lotharingia until his murder in 900.
2511
Alexanderplatz
Alexanderplatz is a large public square and transport hub in the central Mitte district of Berlin. The square is named after the Russian Tsar Alexander I; the name also denotes the larger neighbourhood stretching from in the north-east to and the in the south-west. is reputedly the most visited area of Berlin, beating Friedrichstrasse and City West. It is a popular starting point for tourists, with many attractions including the (TV tower), the Nikolai Quarter and the ('Red City Hall') situated nearby. is still one of Berlin's major commercial areas, housing various shopping malls, department stores and other large retail locations. History. Early history to the 18th century. A hospital had stood at the location of present-day since the 13th century. Named (St. George), the hospital gave its name to the nearby (George Gate) of the Berlin city wall. Outside the city walls, this area was largely undeveloped until around 1400, when the first settlers began building thatched cottages. As a gallows was located close by, the area earned the nickname the ('Devil's Pleasure Garden'). The George Gate became the most important of Berlin's city gates during the 16th century, being the main entry point for goods arriving along the roads to the north and north-east of the city, for example from , and , and the big Hanseatic cities on the Baltic Sea. After the Thirty Years' War, the city wall was strengthened. From 1658 to 1683, a citywide fortress was constructed to plans by the Linz master builder, . The new fortress contained 13 bastions connected by ramparts and was preceded by a moat measuring up to wide. Within the new fortress, many of the historic city wall gates were closed. For example, the southeastern Gate was closed but the George Gate remained open, making it an even more important entrance to the city. In 1681, cattle trading and pig fattening were banned within the city. Frederick William, the Great Elector, granted cheaper plots of land, waiving the basic interest rate, in the area in front of the George Gate. Settlements grew rapidly and a weekly cattle market was established on the square in front of the Gate. The area developed into a suburb – the – which continued to flourish into the late 17th century. Unlike the southwestern suburbs (, ) which were strictly and geometrically planned, the suburbs in the northeast (, and the ) proliferated without plan. Despite a building ban imposed in 1691, more than 600 houses existed in the area by 1700. At that time, the George Gate was a rectangular gatehouse with a tower. Next to the tower stood a remaining tower from the original medieval city walls. The upper floors of the gatehouse served as the city jail. A drawbridge spanned the moat and the gate was locked at nightfall by the garrison using heavy oak planks. A highway ran through the cattle market to the northeast towards . To the right stood the George chapel, an orphanage and a hospital that was donated by the Electress Sophie Dorothea in 1672. Next to the chapel stood a dilapidated medieval plague house which was demolished in 1716. Behind it was a rifleman's field and an inn, later named the . By the end of the 17th century, 600 to 700 families lived in this area. They included butchers, cattle herders, shepherds and dairy farmers. The George chapel was upgraded to the George church and received its own preacher. (1701–1805). After his coronation in on 6 May 1701, the Prussian King Frederick I entered Berlin through the George Gate.
This led to the gate being renamed the King's Gate, and the surrounding area became known in official documents as (King's Gate Square). The suburb was renamed (or 'royal suburbs' short). In 1734, the Berlin Customs Wall, which initially consisted of a ring of palisade fences, was reinforced and grew to encompass the old city and its suburbs, including . This resulted in the King's Gate losing importance as an entry point for goods into the city. The gate was finally demolished in 1746. By the end of the 18th century, the basic structure of the royal suburbs of the had been developed. It consisted of irregular-shaped blocks of buildings running along the historic highways which once carried goods in various directions out of the gate. At this time, the area contained large factories (silk and wool), such as the (one of Berlin's first cloth factories, located in a former barn) and a workhouse established in 1758 for beggars and homeless people, where the inmates worked a man-powered treadmill to turn a mill. Soon, military facilities came to dominate the area, such as the 1799–1800 military parade grounds designed by David Gilly. At this time, the residents of the were mostly craftsmen, petty-bourgeois, retired soldiers and manufacturing workers. The southern part of the later was separated from traffic by trees and served as a parade ground, whereas the northern half remained a market. Beginning in the mid-18th century, the most important wool market in Germany was held in . Between 1752 and 1755, the writer lived in a house on Alexanderplatz. In 1771, a new stone bridge (the ) was built over the moat and in 1777 a colonnade-lined row of shops () was constructed by architect . Between 1783 and 1784, seven three-storey buildings were erected around the square by , including the famous , where lived as a permanent tenant and stayed in the days before his suicide. (1805–1900). On 25 October 1805 the Russian Tsar Alexander I was welcomed to the city on the parade grounds in front of the old King's Gate. To mark this occasion, on 2 November, King Frederick William III ordered the square to be renamed : In the southeast of the square, the cloth factory buildings were converted into the Theater by at a cost of 120,000 Taler. The foundation stone was laid on 31 August 1823 and the opening ceremony occurred on 4 August 1824. Sales were poor, forcing the theatre to close on 3 June 1851. Thereafter, the building was used for wool storage, then as a tenement building, and finally as an inn called until the building's demolition in 1932. During these years, was populated by fish wives, water carriers, sand sellers, rag-and-bone men, knife sharpeners and day laborers. Because of its importance as a transport hub, horse-drawn buses ran every 15 minutes between and in 1847. During the March Revolution of 1848, large-scale street fighting occurred on the streets of , where revolutionaries used barricades to block the route from to the city. Novelist and poet , who worked in the vicinity in a nearby pharmacy, participated in the construction of barricades and later described how he used materials from the Theater to barricade . The continued to grow throughout the 19th century, with three-storey developments already existing at the beginning of the century and fourth storeys being constructed from the middle of the century. By the end of the century, most of the buildings were already five storeys high. 
The large factories and military facilities gave way to housing developments (mainly rental housing for the factory workers who had just moved into the city) and trading houses. At the beginning of the 1870s, the Berlin administration had the former moat filled to build the Berlin city railway, which was opened in 1882 along with (' Railway Station'). In 1883–1884, the Grand Hotel, a neo-Renaissance building with 185 rooms and shops beneath was constructed. From 1886 to 1890, built the police headquarters, a huge brick building whose tower on the northern corner dominated the building. In 1890, a district court at was also established. In 1886, the local authorities built a central market hall west of the rail tracks, which replaced the weekly market on the in 1896. During the end of the 19th century, the emerging private traffic and the first horse bus lines dominated the northern part of the square, the southern part (the former parade ground) remained quiet, having green space elements added by garden director in 1889. The northwest of the square contained a second, smaller green space where, in 1895, the copper Berolina statue by sculptor was erected. Between Empire and the Nazi era (1900–1940). At the beginning of the 20th century, experienced its heyday. In 1901, founded the first German cabaret, the , in the former ('Secession stage') at , initially under the name . It was announced as " as upscale entertainment with artistic ambitions. Emperor-loyal and market-oriented stands the uncritical amusement in the foreground." The merchants , and opened large department stores on : (1904–1911), (1910–1911) and (1911). marketed itself as a department store for the Berlin people, whereas modelled itself as a department store for the world. In October 1905, the first section of the department store opened to the public. It was designed by architects and, who had already won second prize in the competition for the construction of the building. The department store underwent further construction phases and, in 1911, had a commercial space of and the longest department store façade in the world at in length. For the construction of the department store, by architects and , the were removed in 1910 and now stand in the Park in . In October 1908, the ('house of teachers') was opened next to the at . It was designed by and Henry Gross. The building belonged to the ('teachers’ association'), who rented space on the ground floor of the building out to a pastry shop and restaurant to raise funds for the association. The building housed the teachers' library which survived two world wars, and today is integrated into the library for educational historical research. The rear of the property contained the association's administrative building, a hotel for members and an exhibition hall. Notable events that took place in the hall include the funeral services for and on 2 February 1919 and, on 4 December 1920, the (Unification Party Congress) of the Communist Party and the USPD. The First Ordinary Congress of the Communist Workers Party of Germany was held in the nearby restaurant, 1–4 August 1920. 's position as a main transport and traffic hub continued to fuel its development. In addition to the three underground lines, long-distance trains and trains ran along the 's viaduct arches. Omnibuses, horse-drawn from 1877 and, after 1898, also electric-powered trams, ran out of in all directions in a star shape. 
The subway station was designed by Alfred Grenander and followed the colour-coded order of subway stations, which began with green at and ran through to dark red. In the Golden Twenties, was the epitome of the lively, pulsating cosmopolitan city of Berlin, rivalled in the city only by . Many of the buildings and rail bridges surrounding the platz bore large billboards that illuminated the night. The Berlin cigarette company Manoli had a famous billboard at the time which contained a ring of neon tubes that constantly circled a black ball. The proverbial " of those years was characterized as ". Writer wrote a poem referencing the advert, and the composer Rudolf Nelson made the legendary with the dancer Lucie Berber. The writer named his novel, , after the square, and filmed parts of his 1927 film ("Berlin: The Symphony of the Big City") at . Destruction of (1940–1945). One of Berlin's largest air-raid shelters during the Second World War was situated under . It was built between 1941 and 1943 for the by . The war reached in early April 1945. The Berolina statue had already been removed in 1944 and probably melted down for use in arms production. During the Battle of Berlin, Red Army artillery bombarded the area around . The battles of the last days of the war destroyed considerable parts of the historic , as well as many of the buildings around . The had entrenched itself within the tunnels of the underground system. Hours before fighting ended in Berlin on 2 May 1945, troops of the SS detonated explosives inside the north–south tunnel under the Canal to slow the advance of the Red Army towards Berlin's city centre. The entire tunnel flooded, as well as large sections of the network via connecting passages at the underground station. Many of those seeking shelter in the tunnels were killed. Of the then of subway tunnel, around were flooded with more than one million cubic meters () of water. Demolition and reconstruction (1945–1964). Before a planned reconstruction of the entire could take place, all the war ruins needed to be demolished and cleared away. A popular black market emerged within the ruined area, which the police raided several times a day. One structure demolished after World War II was the 'Rote Burg', a red brick building with round arches, previously used as police and Gestapo headquarters. The huge construction project began in 1886 and was completed in 1890; it was one of Berlin's largest buildings. The 'castle' suffered extensive damage during 1944-45 and was demolished in 1957. The site on the southwest corner of Alexanderplatz remained largely unused as a carpark until the Alexa shopping centre opened in 2007. Reconstruction planning for post-war Berlin gave priority to the dedicated space to accommodate the rapidly growing motor traffic in inner-city thoroughfares. This idea of a traffic-orientated city was already based on considerations and plans by and from the 1930s. East Germany. has been subject to redevelopment several times in its history, most recently during the 1960s, when it was turned into a pedestrian zone and enlarged as part of the German Democratic Republic's redevelopment of the city centre. It is surrounded by several notable structures including the (TV Tower). During the Peaceful Revolution of 1989, the demonstration on 4 November 1989 was the largest demonstration in the history of the German Democratic Republic. 
Protests started on 15 October and peaked on 4 November with an estimated 200,000 participants who called on the government of the ruling Socialist Unity Party of Germany to step down and demanded a free press, the opening of the borders and their right to travel. Speakers were , , , , , and . The protests continued and culminated in the unexpected Fall of the Berlin Wall on 9 November 1989. After German reunification (1989). Ever since German reunification, has undergone a gradual process of change with many of the surrounding buildings being renovated. After the political turnaround in the wake of the fall of the Berlin Wall, socialist urban planning and architecture of the 1970s no longer corresponded to the current ideas of an inner-city square. Investors demanded planning security for their construction projects. After initial discussions with the public, the goal quickly arose to reinstate 's tram network for better connections to surrounding city quarters. In 1993, an urban planning ideas competition for architects took place to redesign the square and its surrounding area. In the first phase, there were 16 submissions, five of which were selected for the second phase of the competition. These five architects had to adapt their plans to detailed requirements. For example, the return of the Alex's trams was planned, with the implementation to be made in several stages. The winner, who was determined on 17 September 1993, was the Berlin architect . 's plan was based on Behrens' design and provided a horseshoe-shaped area of seven- to eight-storey buildings and high towers with 42 floors. The and the – both listed buildings – would form the southwestern boundary. Second place went to the design by and . The proposal of the architecture firm Kny & Weber, which was strongly based on the horseshoe shape of Wagner, finally won third place. The design by was chosen on 7 June 1994 by the Berlin Senate as a basis for the further transformation of . In 1993, architect 's master plan for a major redevelopment including the construction of several skyscrapers was published. In 1995, completed the renovation of the . In 1998, the first tram returned to , and in 1999, the town planning contracts for the implementation of and 's plans were signed by the landowners and the investors. 21st century. On 2 April 2000, the Senate finally fixed the development plan for . The purchase contracts between investors and the Senate Department for Urban Development were signed on 23 May 2002, thus laying the foundations for the development. The CUBIX multiplex cinema (CineStar Cubix am Alexanderplatz, styled CUBIX), which opened in November 2000, joined the team of Berlin International Film Festival cinemas in 2007, and the festival shows films on three of its screens. Renovation of the department store began in 2004, led by Berlin professor of architecture, and his son . The building was enlarged by about and has since operated under the name . Beginning with the reconstruction of the department store in 2004, and the biggest underground railway station of Berlin, some buildings were redesigned and new structures built on the square's south-eastern side. Sidewalks were expanded to shrink one of the avenues, a new underground garage was built, and commuter tunnels meant to keep pedestrians off the streets were removed. Between 2005 and 2006, was renovated and later became a branch of the clothing chain, C&A. In 2005, the began work to extend the tram line from to (Alex II).
This route was originally to be opened in 2000 but was postponed several times. After further delays caused by the 2006 FIFA World Cup, the route opened on 30 May 2007. In February 2006, the redesign of the walk-in plaza began. The redevelopment plans were provided by the architecture firm Gerkan, Marg and Partners and the Hamburg-based company . The final plans emerged from a design competition launched by the state of Berlin in 2004. However, the paving work was temporarily interrupted a few months after the start of construction by the 2006 FIFA World Cup and all excavation pits had to be provisionally asphalted over. The construction work could only be completed at the end of 2007. The renovation of , the largest Berlin underground station, had been ongoing since the mid-1990s and was finally completed in October 2008. The was given a pavement of yellow granite, bordered by grey mosaic paving around the buildings. Wall AG modernized the 1920s-era underground toilets at a cost of 750,000 euros. The total redesign cost amounted to around 8.7 million euros. On 12 September 2007 the Alexa shopping centre opened. It is located in the immediate vicinity of the , on the site of the old Berlin police headquarters. With a sales area, it is one of the largest shopping centres in Berlin. In May 2007, the Texas property development company Hines began building a six-story commercial building named . The building was built on a plot of , which, according to the plans, closes the square to the east and thus reduces the area of the Platz. The building was opened on 25 March 2009. At the beginning of 2007, the construction company created an underground garage with three levels below the , located between the hotel tower and the building, which cost 25 million euros to build and provides space for around 700 cars. The opening took place on 26 November 2010. At the same time, the Senate narrowed from almost wide to wide (), thus reducing it to three lanes in each direction. Behind the station, next to the CUBIX cinema in the immediate vicinity of the TV tower, the high residential and commercial building, Alea 101, was built between 2012 and 2014. As of 2014 it was assessed that due to a lack of demand the skyscrapers planned in 1993 were unlikely to be constructed. In January 2014, a 39-story residential tower designed by Frank Gehry was announced, but this project was put on hold in 2018. The area is the largest crime hotspot in Berlin. As of October 2017, was classified as a ("crime-contaminated location") by the (General Safety and Planning Laws). Today and future plans. Despite the reconstruction of the tram line crossing, it has retained its socialist character, including the much-graffitied , a popular venue. is reputedly the most visited area of Berlin, beating Friedrichstrasse and City West. It is a popular starting point for tourists, with many attractions including the (TV tower), the Nikolai Quarter and the ('Red City Hall') situated nearby. is still one of Berlin's major commercial areas, housing various shopping malls, department stores and other large retail locations. Many historic buildings are located in the vicinity of . The traditional seat of city government, the , or 'Red City Hall', is located nearby, as was the former East German parliament building, the . The was demolished from 2006 to 2008 to make room for a full reconstruction of the Baroque Berlin Palace, or , which is set to open in 2019. is also the name of the S-Bahn and U-Bahn stations there.
It is one of Berlin's largest and most important transportation hubs, being a meeting place of three subway () lines, three lines, and many tram and bus lines, as well as regional trains. It also accommodates the Park Inn Berlin and the World Time Clock, a continually rotating installation that shows the time throughout the globe, the House of Travel, and 's (House of Teachers)'. Long-term plans exist for the demolition of the high former (now the Hotel Park-Inn), with the site to be replaced by three skyscrapers. If and when this plan will be implemented is unclear, especially since the hotel tower received a new façade as recently as in 2005, and the occupancy rates of the hotel are very good. However, the plans could give way in the next few years to a suggested high new block conversion. The previous main tenant of the development, Saturn, moved into the building in March 2009. In 2014, Primark opened a branch inside the hotel building. The majority of the planned high skyscrapers will probably never be built. The state of Berlin has announced that it will not enforce the corresponding urban development contracts against the market. Of the 13 planned skyscrapers, 10 remained as of 2008, after modifications to the plans – eight of which had construction rights. Some investors in the Alexa shopping centre announced several times since 2007 that they would sell their respective shares in the plot to an investor interested in building a high-rise building. The first concrete plans for the construction of a high-rise were made by Hines, the investor behind die mitte. In 2009, the construction of a high tower to be built behind die mitte was announced. On 12 September 2011, a slightly modified development plan was presented, which provided for a residential tower housing 400 apartments. In early 2013, the development plan was opened to the public. In autumn 2015, the Berlin Senate organized two forums in which interested citizens could express their opinions on the proposed changes to the . Architects, city planners and Senate officials held open discussions. On that occasion, however, it was reiterated that the plans for high-rise developments were not up for debate. According to the master plan of the architect , up to eleven huge buildings will continue to be built, which will house a mixture of shops and apartments. At the beginning of March 2018, it was announced that the district office had granted planning permission for the first residential high-rise in , the high Alexander Tower. On 29 of the 35 floors, 377 apartments are to be built. It would be located next to the Alexa shopping centre, with a planned completion date of 2021. Roads and public transport. During the post-war reconstruction of the 1960s, was completely pedestrianized. Since then, trams were reintroduced to the area in 1998. station provides connections, access to the U2, U5 and U8 subway lines, regional train lines for DB Regio and ODEG services and, on weekends, the (HBX). Several tram and bus lines also service the area. The following main roads connect to : Several arterial roads lead radially from to the outskirts of Berlin. These include (clockwise from north to south-east): (B 1 and B 5) – – / – (B 1 and B 5 to junction at ) Structures. Fountain of Friendship. The Fountain of Friendship () was erected in 1970 during the redesign of and inaugurated on October 7. It was created by and his group of artists. 
Its water basin has a diameter of 23 meters; the fountain is 6.20 meters high and is built from embossed copper, glass, ceramics and enamel. The water spurts from the highest point and then flows down in spirals over 17 shells, which each have a diameter between one and four meters. After German reunification, it was completely renovated in a metal art workshop during the reconstruction of the . Other. Apart from , is the only existing square in front of one of the medieval gates of Berlin's city wall.
2512
Asian Development Bank
The Asian Development Bank (ADB) is a regional development bank established on 19 December 1966, which is headquartered in the Ortigas Center located in the city of Mandaluyong, Metro Manila, Philippines. The bank also maintains 31 field offices around the world to promote social and economic development in Asia. The bank admits the members of the UN Economic and Social Commission for Asia and the Pacific (UNESCAP, formerly the Economic Commission for Asia and the Far East or ECAFE) and non-regional developed countries. From 31 members at its establishment, ADB now has 68 members. The ADB was modeled closely on the World Bank, and has a similar weighted voting system where votes are distributed in proportion with members' capital subscriptions. ADB releases an annual report that summarizes its operations, budget and other materials for review by the public. The ADB-Japan Scholarship Program (ADB-JSP) enrolls about 300 students annually in academic institutions located in 10 countries within the Region. Upon completion of their study programs, scholars are expected to contribute to the economic and social development of their home countries. ADB is an official United Nations Observer. As of 31 December 2020, Japan and the United States each holds the largest proportion of shares at 15.571%. China holds 6.429%, India holds 6.317%, and Australia holds 5.773%. Organization. The highest policy-making body of the bank is the Board of Governors, composed of one representative from each member state. The Board of Governors, in turn, elect among themselves the twelve members of the board of directors and their deputies. Eight of the twelve members come from regional (Asia-Pacific) members while the others come from non-regional members. The Board of Governors also elect the bank's president, who is the chairperson of the board of directors and manages ADB. The president has a term of office lasting five years, and may be re-elected. Traditionally, and because Japan is one of the largest shareholders of the bank, the president has always been Japanese. The current president is Masatsugu Asakawa. He succeeded Takehiko Nakao on 17 January 2020, who succeeded Haruhiko Kuroda in 2013. The headquarters of the bank is at 6 ADB Avenue, Mandaluyong, Metro Manila, Philippines, and it has 42 field offices in Asia and the Pacific and representative offices in Washington, Frankfurt, Tokyo and Sydney. The bank employs about 3,000 people, representing 60 of its 68 members. History. 1960s. As early as 1956, Japan Finance Minister Hisato Ichimada had suggested to United States Secretary of State John Foster Dulles that development projects in Southeast Asia could be supported by a new financial institution for the region. A year later, Japanese Prime Minister Nobusuke Kishi announced that Japan intended to sponsor the establishment of a regional development fund with resources largely from Japan and other industrial countries. But the US did not warm to the plan and the concept was shelved. See full account in "Banking the Future of Asia and the Pacific: 50 Years of the Asian Development Bank," July 2017. The idea came up again late in 1962 when Kaoru Ohashi, an economist from a research institute in Tokyo, visited Takeshi Watanabe, then a private financial consultant in Tokyo, and proposed a study group to form a development bank for the Asian region. The group met regularly in 1963, examining various scenarios for setting up a new institution and drew on Watanabe's experiences with the World Bank. 
However, the idea received a cool reception from the World Bank itself and the study group became discouraged. In parallel, the concept was formally proposed at a trade conference organized by the Economic Commission for Asia and the Far East (ECAFE) in 1963 by a young Thai economist, Paul Sithi-Amnuai. (ESCAP, United Nations Publication March 2007, "The first parliament of Asia" pp. 65). Despite an initial mixed reaction, support for the establishment of a new bank soon grew. An expert group was convened to study the idea, with Japan invited to contribute to the group. When Watanabe was recommended, the two streams proposing a new bank—from ECAFE and Japan—came together. Initially, the US was on the fence, not opposing the idea but not ready to commit financial support. But a new bank for Asia was soon seen to fit in with a broader program of assistance to Asia planned by United States President Lyndon B. Johnson in the wake of the escalating U.S. military support for the government of South Vietnam. As a key player in the concept, Japan hoped that the ADB offices would be in Tokyo. However, eight other cities had also expressed an interest: Bangkok, Colombo, Kabul, Kuala Lumpur, Manila, Phnom Penh, Singapore, and Tehran. To decide, the 18 prospective regional members of the new bank held three rounds of votes at a ministerial conference in Manila in November/December 1965. In the first round on 30 November, Tokyo failed to win a majority, so a second ballot was held the next day at noon. Although Japan was in the lead, it was still inconclusive, so a final vote was held after lunch. In the third poll, Tokyo gained eight votes to Manila's nine, with one abstention. Therefore, Manila was declared the host of the new development bank; the Japanese were mystified and deeply disappointed. Watanabe later wrote in his personal history of ADB: "I felt as if the child I had so carefully reared had been taken away to a distant country." (Asian Development Bank publication, "Towards a New Asia", 1977, p. 16) As intensive work took place during 1966 to prepare for the opening of the new bank in Manila, high on the agenda was choice of president. Japanese Prime Minister Eisaku Satō asked Watanabe to be a candidate. Although he initially declined, pressure came from other countries and Watanabe agreed. In the absence of any other candidates, Watanabe was elected first President of the Asian Development Bank at its Inaugural Meeting on 24 November 1966. By the end of 1972, Japan had contributed $173.7 million (22.6% of the total) to the ordinary capital resources and $122.6 million (59.6% of the total) to the special funds. In contrast, the United States contributed only $1.25 million to the special fund. After its creation in the 1960s, ADB focused much of its assistance on food production and rural development. At the time, Asia was one of the poorest regions in the world. Early loans went largely to Indonesia, Thailand, Malaysia, South Korea and the Philippines; these countries accounted for 78.48% of the total ADB loans between 1967 and 1972. Moreover, Japan received tangible benefits, 41.67% of the total procurements between 1967 and 1976. Japan tied its special funds contributions to its preferred sectors and regions and procurements of its goods and services, as reflected in its $100 million donation for the Agricultural Special Fund in April 1968. Watanabe served as the first ADB president to 1972. 1970s–1980s. 
In the 1970s, ADB's assistance to developing countries in Asia expanded into education and health, and then to infrastructure and industry. The gradual emergence of Asian economies in the latter part of the decade spurred demand for better infrastructure to support economic growth. ADB focused on improving roads and providing electricity. When the world suffered its first oil price shock, ADB shifted more of its assistance to support energy projects, especially those promoting the development of domestic energy sources in member countries. Following considerable pressure from the Reagan Administration in the 1980s, ADB reluctantly began working with the private sector in an attempt to increase the impact of its development assistance to poor countries in Asia and the Pacific. In the wake of the second oil crisis, ADB expanded its assistance to energy projects. In 1982, ADB opened its first field office, in Bangladesh, and later in the decade, it expanded its work with non-government organizations (NGOs). Japanese presidents Inoue Shiro (1972–76) and Yoshida Taroichi (1976–81) took the spotlight in the 1970s. Fujioka Masao, the fourth president (1981–90), adopted an assertive leadership style, launching an ambitious plan to expand the ADB into a high-impact development agency. On November 18, 1972, the Bank inaugurated its headquarters along Roxas Boulevard in Pasay City, Philippines. In the early 1990s, ADB moved its offices to Ortigas Center in Pasig City, with the Department of Foreign Affairs (Philippines) taking over its old Pasay premises. 1990s. In the 1990s, ADB began promoting regional cooperation by helping the countries on the Mekong River to trade and work together. The decade also saw an expansion of ADB's membership with the addition of several Central Asian countries following the end of the Cold War. In mid-1997, ADB responded to the financial crisis that hit the region with projects designed to strengthen financial sectors and create social safety nets for the poor. During the crisis, ADB approved its largest single loan – a $4 billion emergency loan to South Korea. In 1999, ADB adopted poverty reduction as its overarching goal. 2000s. The early 2000s saw a dramatic expansion of private sector finance. While the institution had such operations since the 1980s (under pressure from the Reagan Administration) the early attempts were highly unsuccessful with low lending volumes, considerable losses and financial scandals associated with an entity named AFIC. However, beginning in 2002, the ADB undertook a dramatic expansion of private sector lending under a new team. Over the course of the next six years, the Private Sector Operations Department (PSOD) grew by a factor of 41 times the 2001 levels of new financings and earnings for the ADB. This culminated with the Board's formal recognition of these achievements in March 2008, when the Board of Directors formally adopted the Long Term Strategic Framework (LTSF). That document formally stated that assistance to private sector development was the lead priority of the ADB and that it should constitute 50% of the bank's lending by 2020. In 2003, the severe acute respiratory syndrome (SARS) epidemic hit the region and ADB responded with programs to help the countries in the region work together to address infectious diseases, including avian influenza and HIV/AIDS. 
ADB also responded to a multitude of natural disasters in the region, committing more than $850 million for recovery in areas of India, Indonesia, Maldives, and Sri Lanka which were impacted by the 2004 Indian Ocean earthquake and tsunami. In addition, $1 billion in loans and grants was provided to the victims of the October 2005 earthquake in Pakistan. In December 2005, China donated $20 million to the ADB for a regional poverty alleviation fund; China's first such fund set up at an international institution. In 2009, ADB's Board of Governors agreed to triple ADB's capital base from $55 billion to $165 billion, giving it much-needed resources to respond to the global economic crisis. The 200% increase is the largest in ADB's history, and was the first since 1994. 2010s. Asia moved beyond the economic crisis and by 2010 had emerged as a new engine of global economic growth though it remained home to two-thirds of the world's poor. In addition, the increasing prosperity of many people in the region created a widening income gap that left many people behind. ADB responded to this with loans and grants that encouraged economic growth. In early 2012, the ADB began to re-engage with Myanmar in response to reforms initiated by the government. In April 2014, ADB opened an office in Myanmar and resumed making loans and grants to the country. In 2017, ADB combined the lending operations of its Asian Development Fund (ADF) with its ordinary capital resources (OCR). The result was to expand the OCR balance sheet to permit increasing annual lending and grants to $20 billion by 2020 — 50% more than the previous level. In 2020, ADB gave a $2 million grant from the Asia Pacific Disaster Response Fund, to support the Armenian government in the fight against the spread of COVID-19 pandemic. In the same year, the ADB committed a $20 million loan to Electric Networks of Armenia, that will ensure electricity for the citizens during the pandemic, as well as approved $500,000 in regional technical assistance to procure personal protective equipment and other medical supplies. Objectives and activities. Aim. The ADB defines itself as a social development organization that is dedicated to reducing poverty in Asia and the Pacific through inclusive economic growth, environmentally sustainable growth, and regional integration. This is carried out through investments – in the form of loans, grants and information sharing – in infrastructure, health care services, financial and public administration systems, helping countries prepare for the impact of climate change or better manage their natural resources, as well as other areas. Focus areas. Eighty percent of ADB's lending is concentrated public sector lending in five operational areas. Financings. The ADB offers "hard" loans on commercial terms primarily to middle income countries in Asia and "soft" loans with lower interest rates to poorer countries in the region. Based on a new policy, both types of loans will be sourced starting January 2017 from the bank's ordinary capital resources (OCR), which functions as its general operational fund. The ADB's Private Sector Department (PSOD) can and does offer a broader range of financings beyond commercial loans. They also have the capability to provide guarantees, equity and mezzanine finance (a combination of debt and equity). In 2017, ADB lent $19.1 billion of which $3.2 billion went to private enterprises, as part of its "nonsovereign" operations. 
ADB's operations in 2017, including grants and cofinancing, totaled $28.9 billion. ADB obtains its funding by issuing bonds on the world's capital markets. It also relies on the contributions of member countries, retained earnings from lending operations, and the repayment of loans. Private sector investments. ADB provides direct financial assistance, in the form of debt, equity and mezzanine finance, to private sector companies for projects that have clear social benefits beyond the financial rate of return. ADB's participation is usually limited, but it leverages a large amount of funds from commercial sources to finance these projects by holding no more than 25% of any given transaction. Cofinancing. ADB partners with other development organizations on some projects to increase the amount of funding available. In 2014, $9.2 billion—or nearly half—of ADB's $22.9 billion in operations was financed by other organizations. According to Jason Rush, Principal Communication Specialist, the Bank communicates with many other multilateral organizations. Funds and resources. More than 50 financing partnership facilities, trust funds, and other funds – totalling several billion dollars each year – are administered by ADB and put toward projects that promote social and economic development in Asia and the Pacific. ADB has raised Rs 5 billion (around Rs 500 crore) from its issuance of 5-year offshore Indian rupee (INR) linked bonds. On 26 February 2020, ADB raised $118 million from rupee-linked bonds, supporting the development of the India International Exchange in India; the issue also contributes to an established yield curve stretching from 2021 through 2030, with $1 billion of bonds outstanding. Access to information. ADB has an information disclosure policy that presumes all information produced by the institution should be disclosed to the public unless there is a specific reason to keep it confidential. The policy calls for accountability and transparency in operations and the timely response to requests for information and documents. ADB does not disclose information that jeopardizes personal privacy, safety and security, or certain financial and commercial information, among other exceptions. Criticism. Since the ADB's early days, critics have charged that the two major donors, Japan and the United States, have had extensive influence over lending, policy and staffing decisions. Oxfam Australia has criticized the Asian Development Bank for insensitivity to local communities: "Operating at a global and international level, these banks can undermine people's human rights through projects that have detrimental outcomes for poor and marginalized communities." The bank also received criticism from the United Nations Environment Programme, which stated in a report that "much of the growth has bypassed more than 70 percent of its rural population, many of whom are directly dependent on natural resources for livelihoods and incomes." There has also been criticism that ADB's large-scale projects cause social and environmental damage due to lack of oversight. One of the most controversial ADB-related projects is Thailand's Mae Moh coal-fired power station. Environmental and human rights activists say ADB's environmental safeguards policy, as well as its policies for indigenous peoples and involuntary resettlement, while usually up to international standards on paper, are often ignored in practice, are too vague or weak to be effective, or are simply not enforced by bank officials. 
The bank has been criticized over its role and relevance in the food crisis. The ADB has been accused by civil society of ignoring warnings leading up to the crisis and of contributing to it by pushing loan conditions that many say unfairly pressure governments to deregulate and privatize agriculture, leading to problems such as the rice supply shortage in Southeast Asia. Indeed, whereas the Private Sector Operations Department (PSOD) closed out that year with financings of $2.4 billion, the ADB has dropped significantly below that level in the years since and is clearly not on the path to achieving its stated goal of 50% of financings going to the private sector by 2020. Critics also point out that the PSOD is the only department that actually makes money for the ADB. Hence, with the vast majority of its lending going to concessionary (sub-market) loans to the public sector, the ADB faces considerable financial difficulty and continuous operating losses. Countries with the largest subscribed capital and voting rights. The following table shows the amounts for the 20 largest countries by subscribed capital and voting power at the Asian Development Bank as of December 2021. Members. ADB has 68 members (as of 23 March 2019): 49 members from the Asia and Pacific region and 19 members from other regions. The year after a member's name indicates the year of membership. At the time a country ceases to be a member, the Bank shall arrange for the repurchase of such country's shares by the Bank as a part of the settlement of accounts with such country in accordance with the provisions of paragraphs 3 and 4 of Article 43.
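The relationship between subscribed capital and voting power described above can be illustrated with a short sketch. This is a deliberately simplified model in which votes are allocated purely in proportion to capital shares; ADB's actual formula also includes an equal block of "basic votes" for every member, so the figures below are illustrative only. The capital shares used are the end-2020 percentages quoted earlier in this entry, and the function name and total vote pool are invented for the example.

```python
# Simplified sketch: voting power allocated in proportion to subscribed capital.
# NOTE: illustration only; ADB's real formula also adds an equal block of
# "basic votes" for every member, which this sketch omits.

# Capital subscription shares (percent of total), as of 31 December 2020,
# taken from the figures quoted in this article.
capital_share = {
    "Japan": 15.571,
    "United States": 15.571,
    "China": 6.429,
    "India": 6.317,
    "Australia": 5.773,
}

def proportional_votes(shares, total_votes=1_000_000):
    """Split a hypothetical pool of votes in proportion to each member's share."""
    total_share = sum(shares.values())
    return {member: round(total_votes * s / total_share) for member, s in shares.items()}

if __name__ == "__main__":
    for member, votes in proportional_votes(capital_share).items():
        print(f"{member:>14}: {votes:,} votes")
```

Under such a purely proportional scheme, Japan and the United States would hold identical voting power, mirroring their equal capital subscriptions, while the basic-vote component in the real formula slightly narrows the gap between large and small shareholders.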
2514
Aswan
Aswan is a city in Southern Egypt, and is the capital of the Aswan Governorate. Aswan is a busy market and tourist centre located just north of the Aswan Dam on the east bank of the Nile at the first cataract. The modern city has expanded and includes the formerly separate community on the island of Elephantine. Aswan includes five monuments within the UNESCO World Heritage Site of the Nubian Monuments from Abu Simbel to Philae (despite Aswan being neither Nubian, nor between Abu Simbel and Philae); these include the Old and Middle Kingdom tombs of Qubbet el-Hawa, the town of Elephantine, and the stone quarries and Unfinished Obelisk. The city's Nubian Museum is an important archaeological center, containing finds from the International Campaign to Save the Monuments of Nubia prior to the Aswan Dam's flooding of all of Lower Nubia. The city is part of the UNESCO Creative Cities Network in the category of craft and folk art. Aswan joined the UNESCO Global Network of Learning Cities in 2017. Other spellings and variations. Aswan was formerly spelled Assuan or Assouan. Names in other languages include the proposed Biblical Hebrew סְוֵנֵה ("Sǝwēnê"). The Nubians also call the city "Dib", which means "fortress, palace" and is derived from the Old Nubian name ⲇⲡ̅ⲡⲓ. History. Aswan is the ancient city of Swenett, later known as Syene, which in antiquity was the frontier town of Ancient Egypt facing the south. Swenett is supposed to have derived its name from an Egyptian goddess of the same name. This goddess was later identified as Eileithyia by the Greeks and Lucina by the Romans during their occupation of Ancient Egypt because of the similar association of their goddesses with childbirth; the import of the name is "the opener". The ancient name of the city is also said to be derived from the Egyptian symbol for "trade", or "market". Because the Ancient Egyptians oriented themselves toward the origin of the life-giving waters of the Nile in the south, and as Swenett was the southernmost town in the country, Egypt was always conceived to "open" or begin at Swenett. The city stood upon a peninsula on the right (east) bank of the Nile, immediately below (and north of) the first cataract of the flowing waters, which extended to it from Philae. Navigation to the delta was possible from this location without encountering a barrier. The stone quarries of ancient Egypt located here were celebrated for their stone, and especially for the granitic rock called syenite. They furnished the colossal statues, obelisks, and monolithic shrines that are found throughout Egypt, including the pyramids; and the traces of the quarrymen who worked these quarries 3,000 years ago are still visible in the native rock. The quarries lie on either bank of the Nile, and a road was cut beside them from Syene to Philae. Swenett was equally important as a military station and for its position on a trade route. Under every dynasty it was a garrison town, and here tolls and customs were levied on all boats passing southwards and northwards. Around 330, the legion stationed here received a bishop from Alexandria; this later became the Coptic Diocese of Syene. The city is mentioned by numerous ancient writers, including Herodotus, Strabo, Stephanus of Byzantium, Ptolemy, Pliny the Elder, and Vitruvius, and it appears on the Antonine Itinerary. It may also be mentioned in the Book of Ezekiel and the Book of Isaiah. The Nile is nearly wide above Aswan. 
From this frontier town to the northern extremity of Egypt, the river flows without bar or cataract. The voyage from Aswan to Alexandria usually took 21 to 28 days in favorable weather. Archaeological findings. In April 2018, the Egyptian Ministry of Antiquities announced the discovery of the head of a bust of Roman Emperor Marcus Aurelius at the Temple of Kom Ombo during work to protect the site from groundwater. In September 2018, the Egyptian Antiquities Minister Khaled el-Enany announced that a sandstone sphinx statue had been discovered at the temple of Kom Ombo; the statue likely dates to the Ptolemaic Dynasty. Archaeologists discovered the mummified remains of 35 Egyptians in a tomb in Aswan in 2019. Italian archaeologist Patrizia Piacentini and El-Enany both reported that the tomb, where the remains of ancient men, women and children were found, dates back to the Greco-Roman period between 332 BC and 395 AD. While the remains believed to belong to a mother and a child were well preserved, others had suffered major destruction. Alongside the mummies, artifacts including painted funerary masks, vases of bitumen used in mummification, pottery and wooden figurines were uncovered. Hieroglyphics on the tomb indicate that it belonged to a tradesman named Tjit. Piacentini commented: "It's a very important discovery because we have added something to the history of Aswan that was missing. We knew about tombs and necropoli dating back to the second and third millennium, but we didn't know where the people who lived in the last part of the Pharaonic era were. Aswan, on the southern border of Egypt, was also a very important trading city." In 2012, Stan Hendrick, John Coleman Darnell and Maria Gatto recorded petroglyphic engravings at Nag el-Hamdulab in Aswan featuring representations of a boat procession, solar symbolism and the earliest depiction of the White Crown, with an estimated dating range between 3200 BC and 3100 BC. In February 2021, archaeologists from the Egyptian Ministry of Antiquities announced significant discoveries at an archaeological site called Shiha Fort in Aswan, namely a Ptolemaic-period temple, a Roman fort, an early Coptic church and an inscription in hieratic script. According to Mostafa Waziri, the crumbling temple was decorated with palm leaf carvings and an incomplete sandstone panel that depicted a Roman emperor. Researcher Abdel Badie states more generally that the church contained ovens used to bake pottery, four rooms, a long hall, stairs, and stone tiles. Geography. Northern Tropic boundary. The latitude of the city that would become Aswan – located at 24° 5′ 23″ – was an object of great interest to the ancient geographers and mathematicians. They believed that it was seated immediately under the tropic, and that on the day of the summer solstice, a vertically positioned staff cast no shadow. They noted that the sun's disc was reflected in a deep well (or pit) at noon. This statement is only approximately correct; at the summer solstice, the shadow was only a tiny fraction of the staff's height, and so could scarcely be discerned, and the northern limb of the Sun's disc would be nearly vertical. More than 2,200 years ago, the Greek polymath Eratosthenes used this information to calculate Earth's circumference. 
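A brief worked reconstruction may make the calculation concrete. The figures below are the ones conventionally attributed to Eratosthenes rather than values given in this article: a noon shadow angle of about 7.2° measured at Alexandria while the Sun stood directly overhead at Syene, and a distance of roughly 5,000 stadia between the two cities.

```latex
% Conventional reconstruction of Eratosthenes' estimate (figures assumed, not from this article)
\theta \approx 7.2^{\circ} = \tfrac{1}{50}\times 360^{\circ}
\quad\Longrightarrow\quad
C \approx \frac{360^{\circ}}{\theta}\, d = 50 \times 5\,000~\text{stadia} = 250\,000~\text{stadia}
```

Depending on which length is assumed for the stadion, 250,000 stadia corresponds to very roughly 39,000–46,000 km, close to the modern equatorial circumference of about 40,075 km.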
Climate. Aswan has a hot desert climate (Köppen climate classification "BWh") like the rest of Egypt. Aswan and Luxor have the hottest summer days of any city in Egypt. Aswan is one of the hottest, sunniest and driest cities in the world. Average high temperatures are consistently above during summer (June, July, August and also September), while average low temperatures remain above . Average high temperatures remain above during the coldest month of the year, while average low temperatures remain above . Summers are very prolonged and extremely hot with blazing sunshine, although the desert heat is dry. Winters are brief and pleasantly mild, though nights may be cool at times. The climate of Aswan is extremely dry year-round, with less than of average annual precipitation. The desert city is one of the driest in the world, and rainfall does not occur every year; as of early 2001, the last rain there had fallen seven years earlier. When heavy precipitation does occur, as in a November 2021 rain and hail storm, flash flooding can drive scorpions from their lairs to deadly effect. Aswan is one of the least humid cities on the planet, with an average relative humidity of only 26%, a maximum mean of 42% during winter and a minimum mean of 16% during summer. The weather of Aswan is extremely clear, bright and sunny year-round, with low seasonal variation and almost 4,000 hours of annual sunshine, very close to the maximum theoretical sunshine duration. Aswan is one of the sunniest places on Earth. The highest temperature on record was set on July 4, 1918, and the lowest on January 6, 1989. Infrastructure. Education. In 2012, Aswan University, which is headquartered in the city, was inaugurated. Aswan is also home to the Aswan Higher Institute of Social Work, which was established in 1975. Transport. The city is crossed by the Cape to Cairo Road, which connects it to Luxor and Cairo to the north, and to Abu Simbel and Wadi Halfa to the south. Also important is the Aswan–Berenice highway, which connects with the ports of the Red Sea. Aswan is linked to Cairo by the Cape to Cairo Railway, which also connects it with Wadi Halfa. The railway is incomplete towards the south. Other key transport infrastructure includes the Port of Aswan, the largest river port in the region, and Aswan International Airport. International relations. Twin towns/Sister cities. Aswan is twinned with:
2519
Adelaide of Italy
Adelaide of Italy (; 931 – 16 December 999 AD), also called Adelaide of Burgundy, was Holy Roman Empress by marriage to Emperor Otto the Great. She was crowned with him by Pope John XII in Rome on 2 February 962. She was the first empress designated "consors regni", denoting a "co-bearer of royalty" who shared power with her husband. She was essential as a model for future consorts regarding both status and political influence. She was regent of the Holy Roman Empire as the guardian of her grandson in 991–995. Life. Early life. Adelaide was born in Orbe Castle, Orbe, Kingdom of Upper Burgundy (now in modern-day Switzerland), to Rudolf II of Burgundy, a member of the Elder House of Welf, and Bertha of Swabia. Adelaide was involved from the beginning of the complicated fight to control not only Burgundy but also Lombardy. The battle between her father Rudolf II and Berengar I to control northern Italy ended with Berengar's death, and Rudolf could claim the throne. However, the inhabitants of Lombardy weren't happy with this outcome and called for help of another ally, Hugh of Provence, who had long considered Rudolf an enemy. Although Hugh challenged Rudolf for the Burgundian throne, he only succeeded when Adelaide's father died in 937. In order to be able to control Upper Burgundy Hugh decided to marry his son Lothair II, the nominal King of Italy, to the 15-year-old Adelaide (in 947, before 27 June). The marriage produced a daughter, Emma of Italy, born about 948. Emma became Queen of West Francia by marrying King Lothair of France. Marriage and alliance with Otto I. The calendar of saints states that Lothair was poisoned on 22 November 950 in Turin by the holder of real power, his successor, Berengar II of Italy. There were some suspicions amongst the people of Lombardy that Adelaide wanted to rule the kingdom by herself. Berengar attempted to thwart this and cement his political power by forcing her to marry his son Adalbert. Adelaide refused and fled, taking refuge in the castle of Como. Nevertheless, she was quickly tracked down and was imprisoned for four months at Garda. According to Adelaide's contemporary biographer, Odilo of Cluny, she managed to escape from captivity. After a time spent in the marshes nearby, she was rescued by a priest and taken to a "certain impregnable fortress," likely the fortified town of Canossa Castle near Reggio. She was able to send an emissary to the East Frankish king Otto I asking for his protection. Adelaide met Otto at the old Lombard capital of Pavia and they married on 23 September 951. Early in their marriage, Adelaide and Otto had two children, Henry and Bruno, both of whom died before reaching adulthood. A few years later, in 953, Liudolf, Duke of Swabia, Otto's son by his first marriage, instigated a big revolt against his father that was quelled by the latter. On account of this episode, Otto decided to dispossess Liudolf of his ducal title. This decision favoured the position of Adelaide and her descendants at court. Adelaide also managed to retain her entire territorial dowry. After returning to Germany with his new wife, Otto cemented the existence of the Holy Roman Empire by defeating the Hungarian invaders at the Battle of Lechfeld on 10 August 955. In addition, he extended the boundaries of East Francia beyond the Elbe River, defeating the Obotrites and other Slavs of the Elbe at the battle of Recknitz on 16 October 955. That same year, Adelaide gave birth to Otto II. In 955 or 956, she gave birth to Matilda, Abbess of Quedlinburg. 
Holy Roman Empress. Adelaide accompanied her husband on his second expedition to Italy to subdue the revolt of Berengar II and to protect Pope John XII. In Rome, Otto the Great was crowned Holy Roman Emperor on 2 February 962 by Pope John XII. Breaking tradition, Pope John XII also crowned Adelaide as Holy Roman Empress. In 960, a new "ordo" had been created for her coronation and anointing, including prayers to biblical female figures, especially Esther. The "ordo" presents a theological and political concept that legitimizes the empress's status as a divinely ordained component of the earthly rule. In 966, Adelaide and the eleven-year-old Otto II traveled again with Otto on his third expedition to Italy, where the Emperor restored the newly elected Pope John XIII to his throne (and executed some of the Roman rioters who had deposed him). The support of Adelaide (the legitimate heir to the Italian throne, which according to late Carolingian traditions would also denote a legitimate claim to the imperial throne) and her extensive network of relations were crucial in ensuring Otto's legitimacy in his conquest of Italy and in bringing the imperial crown to the couple. Adelaide remained in Rome for six years while Otto ruled his kingdom from Italy. Otto II was crowned co-emperor in 967, then married the Byzantine princess Theophanu in April 972, resolving the conflict between the two empires in southern Italy and ensuring the imperial succession. Adelaide and her husband returned to Germany, where Otto died in May 973, at the same Memleben palace where his father had died 37 years earlier. After her coronation, which increased her power as she was now "consors regni" and able to receive people from the entire Empire, Adelaide's interventions in political decisions increased. According to Buchinger, "Between 962 and 972 Adelheid appears as intervenient in seventy-five charters. Additionally Adelheid and Otto I are named together in Papal bulls". She often protected ecclesiastic institutions, seemingly to gain a sphere of influence separate from that of her husband. Between 991 and 993, the brothers of Feuchtwang wrote to her and requested to be "protected by the shadow of your rule from now on, we may be safe from the tumults of secular attacks". They promised they would pray for her so that her reign would be long and stable. Adelaide wielded a great amount of power during her husband's reign, as evidenced by the requests made of her. A letter written in the 980s by her daughter Emma demanded that Adelaide intervene against Emma's enemies and mobilize forces in the Ottonian Empire. She also asked that Adelaide capture Hugh Capet, who had already been elected king of West Francia in 987. Another enemy of Emma's was Charles, the brother of Emma's deceased consort Lothair, who had accused his sister-in-law of adultery. Another pleader was Gerbert of Aurillac, at that time archbishop of Reims (the later Pope Sylvester II), who wrote to Adelaide to ask for protection against his enemies. Buchinger remarks that "These examples are remarkable, because they imply that Adelheid had the possibilities to help in both cases or at least Emma and Gerbert do believe that she could have intervened and succeeded. Both are themselves important political figures in their realm and still they rely on Adelheid. Adelheid's power and importance must have been extremely stable and reliable to do as wished by the pleaders." Otto II's era. 
In the years following Otto's death, Adelaide exerted a powerful influence at court. However, Adelaide was in conflict with her daughter-in-law, the Byzantine princess Theophanu, as only one woman could be queen and hold the associated functions and powers at court. Adelaide was able to maintain the title "imperatrix augusta" even though Theophanu now also used it. Moreover, Theophanu opposed Adelaide in the use of her dowry lands, which Adelaide wanted to continue to use and donate to ecclesiastical institutions, ensuring her power base. Adelaide had the right to make transactions of her Italian lands as she pleased, but she needed the permission of the emperor to use her Ottonian lands. Adelaide also sided with her extended kin against Otto II. Wilson compares this action with those of other royal women: "Royal women possessed agency and did not always do the bidding of male relatives. Engelberge greatly influenced her husband, Emperor Louis II, in his attempts to extend imperial control to southern Italy in the 870s. Matilda's favouritism for her younger son Heinrich caused Otto I considerable trouble, while Adelaide sided with her extended kin against her own son, Otto II, until he temporarily exiled her to Burgundy in 978. Agency was clearest during regencies, because these lacked formal rules, offering scope for forceful personalities to assert themselves." After being expelled from court by Otto II in 978, she divided her time between living in Italy in the royal palace of Pavia and in Arles with her brother Conrad I, King of Burgundy, through whom she was finally reconciled with her son. In 983 (shortly before his death) Otto II appointed her his viceroy in Italy. Regency. In 983, her son Otto II died and was succeeded by Adelaide's grandson Otto III under the regency of his mother Theophanu, while Adelaide remained in Italy. For some time, Adelaide and Theophanu were able to put aside their separate interests and work together to ensure Otto III's succession. This is seen through their joint appearance in the charters. According to the "Annales Quedlinburgenses", after Otto II's death, Henry, Duke of Bavaria, kidnapped Otto III. The narrative claims that Adelaide returned from Lombardy to join with Theophanu, Matilda, and other leaders of Europe and reclaim the child. When Theophanu died in 991, Adelaide assumed the regency on behalf of Otto III until he reached legal majority four years later. Adelaide's role in establishing Otto's position can be seen in a letter Otto III wrote to his grandmother in 996: "According to your [Adelheid's] wishes and desires, the divinity has conferred the rights of an empire on us [Otto III] with a happy outcome". Troubles in the East continued under Adelaide, as Boleslaus of Bohemia wavered in his loyalty. In 992, there was war between Bohemia and Poland, and again, as in Theophanu's time, the Ottonian regime sided with Poland. Jestice comments that "Christianity was not re-established in the land of the Liutizi during their lifetimes. But there were territorial gains, and by 987 it was possible to begin rebuilding destroyed fortresses along the Elbe". A Saxon army, with Otto III present, took Brandenburg in 991. There are reports of another expedition in 992. Thietmar of Merseburg reports that Otto III dismissed his grandmother after his mother's death, but Althoff doubts this story. Even after Otto attained majority, Adelaide often accompanied him on his travels and influenced him, along with other women. 
In Burgundy, Adelaide's homeland, the counts and castellans behaved increasingly independently from their king Rudolph III. Just before her death in 999, she had to intervene in Burgundy to restore peace. Later years. Adelaide resigned as regent when Otto III was declared to be of legal majority in 995. From then on, she devoted herself exclusively to her works of charity, in particular to the foundation and restoration of religious houses, i.e. monasteries, churches and abbeys. Adelaide had long entertained close relations with Cluny, then the center of the movement for ecclesiastical reform, and in particular with its abbots Majolus and Odilo. She retired to a nunnery she had founded in c. 991 at Selz in Alsace. On her way to Burgundy to support her nephew Rudolf III against a rebellion, she died at Selz Abbey on 16 December 999, days short of the millennium she thought would bring the Second Coming of Christ. She was buried in the Abbey, and Pope Urban II canonized her in 1097. After serious flooding, which almost completely destroyed the abbey in 1307, Adelaide's relics were moved elsewhere. A goblet reputed to have belonged to Saint Adelaide has long been preserved in Seltz; it was used to give potions to people with fever, and the healings were said to have been numerous. Adelaide constantly devoted herself to the service of the church and peace, and to the empire as guardian of both; she also interested herself in the conversion of the Slavs. She was thus a principal agent — almost an embodiment — of the work of the pre-schism Church at the end of the Early Middle Ages in the construction of the religious culture of Central Europe. Some of her relics are preserved in a shrine in Hanover. Her feast day, 16 December, is still kept in many German dioceses. Issue. In 947, Adelaide was married to King Lothair II of Italy. The union produced one child: In 951, Adelaide was married to King Otto I, the future Holy Roman Emperor. The union produced four children: Historiography and cultural depictions. Historiography. Adelaide was one of the most important and powerful medieval female rulers. Historically, as empress and saint, she has been described as powerful, with both male attributes (such as strength, justness and prudence) and female attributes (piety, self-denial). Modern German historiography tends to focus on her contributions to the Ottonian dynasty and the development of the Holy Roman Empire. Depictions in art. Adelaide is usually represented in the garb of an empress, with sceptre and crown. Since the 14th century, she has also been given as an attribute a model church or a ship (with which she is said to have escaped from captivity). The most famous representation of Adelaide in German art belongs to a group of sandstone figures in the choir of Meissen Cathedral, created around 1260. She is shown there with her husband, who, although not canonized, is included because he founded the diocese of Meissen with her.
2524
Airbus A300
The Airbus A300 is a wide-body airliner developed and manufactured by Airbus. In September 1967, aircraft manufacturers in the United Kingdom, France, and West Germany signed a memorandum of understanding to develop a large airliner. West Germany and France reached an agreement on 29 May 1969, after the British withdrew from the project on 10 April 1969. European collaborative aerospace manufacturer Airbus Industrie was formally created on 18 December 1970 to develop and produce it. The prototype first flew on 28 October 1972. The first twin-engine widebody airliner, the A300 typically seats 247 passengers in two classes over a range of 5,375 to 7,500 km (2,900 to 4,050 nmi). Initial variants are powered by General Electric CF6-50 or Pratt & Whitney JT9D turbofans and have a three-crew flight deck. The improved A300-600 has a two-crew cockpit and updated CF6-80C2 or PW4000 engines; it made its first flight on 8 July 1983 and entered service later that year. The A300 is the basis of the smaller A310 (first flown in 1982) and was adapted into a freighter version. Its cross section was retained for the larger four-engined A340 (1991) and the larger twin-engined A330 (1992). It is also the basis for the oversize Beluga transport (1994). Launch customer Air France introduced the type on 23 May 1974. After initially limited demand, sales took off as the type was proven in early service, beginning three decades of steady orders. It has a similar capacity to the Boeing 767-300, introduced in 1986, but lacks the range of the 767-300ER. During the 1990s, the A300 became popular with cargo aircraft operators, both as passenger airliner conversions and as original builds. Production ceased in July 2007 after 561 deliveries. At the last published count, there were 228 A300 family aircraft in commercial service. Development. Origins. During the 1960s, European aircraft manufacturers such as Hawker Siddeley and the British Aircraft Corporation, based in the UK, and Sud Aviation of France had ambitions to build a new 200-seat airliner for the growing civil aviation market. While studies were performed and considered, such as a stretched twin-engine variant of the Hawker Siddeley Trident and an expanded development of the British Aircraft Corporation (BAC) One-Eleven, designated the BAC Two-Eleven, it was recognized that if each of the European manufacturers were to launch similar aircraft into the market at the same time, none would achieve the sales volume needed to make them viable. In 1965, a British government study, known as the Plowden Report, had found British aircraft production costs to be between 10% and 20% higher than those of their American counterparts due to shorter production runs, which was in part due to the fractured European market. To overcome this factor, the report recommended the pursuit of multinational collaborative projects between the region's leading aircraft manufacturers. European manufacturers were keen to explore prospective programs, the proposed 260-seat wide-body "HBN 100" between Hawker Siddeley, Nord Aviation, and Breguet Aviation being one such example. National governments were also keen to support such efforts amid a belief that American manufacturers could dominate the European Economic Community; in particular, Germany had ambitions for a multinational airliner project to invigorate its aircraft industry, which had declined considerably following the Second World War. 
During the mid-1960s, both Air France and American Airlines had expressed interest in a short-haul twin-engine wide-body aircraft, indicating market demand for such an aircraft. In July 1967, during a high-profile meeting between French, German, and British ministers, an agreement was made for greater cooperation between European nations in the field of aviation technology, and "for the joint development and production of an airbus". The word "airbus" at this point was a generic aviation term for a larger commercial aircraft, and was considered acceptable in multiple languages, including French. Shortly after the July 1967 meeting, French engineer Roger Béteille was appointed as the technical director of what would become the A300 program, while Henri Ziegler, chief operating officer of Sud Aviation, was appointed as the general manager of the organization, and German politician Franz Josef Strauss became the chairman of the supervisory board. Béteille drew up an initial work share plan for the project, under which French firms would produce the aircraft's cockpit, the control systems, and the lower-center portion of the fuselage; Hawker Siddeley would manufacture the wings; and German companies would produce the forward, rear and upper part of the center fuselage sections. Additional work included the moving elements of the wings, produced in the Netherlands, and the horizontal tail plane, produced in Spain. An early design goal for the A300 that Béteille had stressed the importance of was the incorporation of a high level of technology, which would serve as a decisive advantage over prospective competitors. As such, the A300 would feature the first use of composite materials on any passenger aircraft, the leading and trailing edges of the tail fin being composed of glass fibre reinforced plastic. Béteille opted for English as the working language for the developing aircraft, and decided against using metric instrumentation and measurements, as most airlines already had US-built aircraft. These decisions were partially influenced by feedback from various airlines, such as Air France and Lufthansa, as an emphasis had been placed on determining the specifics of what kind of aircraft potential operators were seeking. According to Airbus, this cultural approach to market research had been crucial to the company's long-term success. Workshare and redefinition. On 26 September 1967, the British, French, and West German governments signed a Memorandum of Understanding to start development of the 300-seat Airbus A300. At this point, the A300 was only the second major joint aircraft programme in Europe, the first being the Anglo-French Concorde. Under the terms of the memorandum, Britain and France were each to receive a 37.5 per cent work share on the project, while Germany received a 25 per cent share. Sud Aviation was recognized as the lead company for the A300, with Hawker Siddeley selected as the British partner company. At the time, news of the announcement had been clouded by the British Government's support for the Airbus, which coincided with its refusal to back BAC's proposed competitor, the BAC 2–11, despite a preference for the latter expressed by British European Airways (BEA). Another parameter was the requirement for a new engine to be developed by Rolls-Royce to power the proposed airliner: a derivative of the in-development Rolls-Royce RB211, the triple-spool RB207. The program cost was US$4.6 billion (in 1993 dollars). 
In December 1968, the French and British partner companies (Sud Aviation and Hawker Siddeley) proposed a revised configuration, the 250-seat Airbus A250. It had been feared that the original 300-seat proposal was too large for the market, so the design was scaled down to produce the A250. The dimensional changes involved in the shrink reduced the length and diameter of the fuselage, lowering the overall weight. For increased flexibility, the cabin floor was raised so that standard LD3 freight containers could be accommodated side-by-side, allowing more cargo to be carried. Refinements made by Hawker Siddeley to the wing's design provided for greater lift and overall performance; this gave the aircraft the ability to climb faster and attain a level cruising altitude sooner than any other passenger aircraft. It was later renamed the A300B. Perhaps the most significant change to the A300B was that it would not require new engines to be developed, being of a suitable size to be powered by Rolls-Royce's RB211, or alternatively the American Pratt & Whitney JT9D and General Electric CF6 powerplants; this switch was recognized as considerably reducing the project's development costs. To attract potential customers in the US market, it was decided that General Electric CF6-50 engines would power the A300 in place of the British RB207; these engines would be produced in co-operation with the French firm Snecma. By this time, Rolls-Royce had been concentrating its efforts on developing its RB211 turbofan engine instead, and progress on the RB207's development had been slow for some time, the firm having suffered due to funding limitations; both had been factors in the engine switch decision. On 10 April 1969, a few months after the decision to drop the RB207 had been announced, the British government announced that it would withdraw from the Airbus venture. In response, West Germany proposed to France that it would be willing to contribute up to 50% of the project's costs if France was prepared to do the same. Additionally, the managing director of Hawker Siddeley, Sir Arnold Alexander Hall, decided that his company would remain in the project as a favoured sub-contractor, developing and manufacturing the wings for the A300, which would later prove pivotal to later versions' impressive performance, from short domestic to long intercontinental flights. Hawker Siddeley spent £35 million of its own funds, along with a further £35 million loan from the West German government, on the machine tooling to design and produce the wings. Programme launch. On 29 May 1969, during the Paris Air Show, French transport minister Jean Chamant and German economics minister Karl Schiller signed an agreement officially launching the Airbus A300, the world's first twin-engine widebody airliner. The intention of the project was to produce an aircraft that was smaller, lighter, and more economical than its three-engine American rivals, the McDonnell Douglas DC-10 and the Lockheed L-1011 TriStar. In order to meet Air France's demand for an aircraft larger than the 250-seat A300B, it was decided to stretch the fuselage to create a new variant, designated the A300B2, which would be offered alongside the original 250-seat A300B, henceforth referred to as the A300B1. On 3 September 1970, Air France signed a letter of intent for six A300s, marking the first order won for the new airliner. 
In the aftermath of the Paris Air Show agreement, it was decided that, in order to provide effective management of responsibilities, a Groupement d'intérêt économique would be established, allowing the various partners to work together on the project while remaining separate business entities. On 18 December 1970, Airbus Industrie was formally established following an agreement between Aérospatiale (the newly merged Sud Aviation and Nord Aviation) of France and the antecedents to Deutsche Aerospace of Germany, each receiving a 50 per cent stake in the newly formed company. In 1971, the consortium was joined by a third full partner, the Spanish firm CASA, which received a 4.2 per cent stake, with the other two members reducing their stakes to 47.9 per cent each. In 1979, Britain joined the Airbus consortium via British Aerospace, into which Hawker Siddeley had merged; it acquired a 20 per cent stake in Airbus Industrie, with France and Germany each reducing their stakes to 37.9 per cent. Prototype and flight testing. Airbus Industrie was initially headquartered in Paris, where design, development, flight testing, sales, marketing, and customer support activities were centered; the headquarters was relocated to Toulouse in January 1974. The final assembly line for the A300 was located adjacent to Toulouse Blagnac International Airport. The manufacturing process necessitated transporting each aircraft section, produced by the partner companies scattered across Europe, to this one location. A combination of ferries and roads was used to move the sections for the assembly of the first A300; however, this was time-consuming and not viewed as ideal by Felix Kracht, Airbus Industrie's production director. Kracht's solution was to have the various A300 sections brought to Toulouse by a fleet of Boeing 377-derived Aero Spacelines Super Guppy aircraft, by which means none of the manufacturing sites were more than two hours away. Having the sections airlifted in this manner made the A300 the first airliner to use just-in-time manufacturing techniques, and allowed each company to manufacture its sections as fully equipped, ready-to-fly assemblies. In September 1969, construction of the first prototype A300 began. On 28 September 1972, this first prototype was unveiled to the public; it conducted its maiden flight from Toulouse–Blagnac International Airport on 28 October that year. This maiden flight, which was performed a month ahead of schedule, lasted for one hour and 25 minutes; the captain was Max Fischl and the first officer was Bernard Ziegler, son of Henri Ziegler. In 1972, the unit cost was US$17.5 million. On 5 February 1973, the second prototype performed its maiden flight. The flight test program, which involved a total of four aircraft, was relatively problem-free, accumulating 1,580 flight hours throughout. In September 1973, as part of promotional efforts for the A300, the new aircraft was taken on a six-week tour around North America and South America to demonstrate it to airline executives, pilots, and would-be customers. Amongst the consequences of this expedition, it allegedly brought the A300 to the attention of Frank Borman of Eastern Airlines, one of the "big four" U.S. airlines. Entry into service. On 15 March 1974, type certificates were granted for the A300 by both German and French authorities, clearing the way for its entry into revenue service. On 23 May 1974, Federal Aviation Administration (FAA) certification was received. 
The first production model, the A300B2, entered service in 1974, followed by the A300B4 one year later. Initially, the consortium's success was poor, in part due to the economic consequences of the 1973 oil crisis, but by 1979 there were 81 A300 passenger liners in service with 14 airlines, alongside 133 firm orders and 88 options. Ten years after the official launch of the A300, the company had achieved a 26 per cent market share in terms of dollar value, enabling Airbus Industrie to proceed with the development of its second aircraft, the Airbus A310. Design. The Airbus A300 is a wide-body medium-to-long range airliner; it has the distinction of being the first twin-engine wide-body aircraft in the world. In 1977, the A300 became the first Extended Range Twin Operations (ETOPS)-compliant aircraft, due to its high performance and safety standards. Another world first for the A300 was the use of composite materials on a commercial aircraft, which were used on both secondary and, later, primary airframe structures, decreasing overall weight and improving cost-effectiveness. Other firsts included the pioneering use of center-of-gravity control, achieved by transferring fuel between various locations across the aircraft (the underlying balance arithmetic is sketched below), and electrically signaled secondary flight controls. The A300 is powered by a pair of underwing turbofan engines, either General Electric CF6 or Pratt & Whitney JT9D engines; the sole use of underwing engine pods allowed any suitable turbofan engine to be more readily used. The lack of a third tail-mounted engine, as per the trijet configuration used by some competing airliners, allowed the wings to be located further forwards and the size of the vertical stabilizer and elevator to be reduced, which had the effect of increasing the aircraft's flight performance and fuel efficiency. Airbus partners employed the latest technology, some of which had been derived from Concorde, on the A300. According to Airbus, new technologies adopted for the airliner were selected principally for increased safety, operational capability, and profitability. Upon entry into service in 1974, the A300 was a very advanced plane, which went on to influence later airliner designs. The technological highlights include advanced wings by de Havilland (later BAE Systems) with supercritical airfoil sections for economical performance and advanced, aerodynamically efficient flight control surfaces. The circular fuselage section allows eight-abreast passenger seating and is wide enough for two LD3 cargo containers side by side. Structures are made from metal billets, reducing weight. It was the first airliner to be fitted with wind shear protection. Its advanced autopilots are capable of flying the aircraft from climb-out to landing, and it has an electrically controlled braking system. Later A300s incorporated other advanced features such as the Forward-Facing Crew Cockpit (FFCC), which enabled a two-pilot flight crew to fly the aircraft without the need for a flight engineer, whose functions were automated; this two-man cockpit concept was a world first for a wide-body aircraft. Glass cockpit flight instrumentation, which used cathode ray tube (CRT) monitors to display flight, navigation, and warning information, along with fully digital dual autopilots and digital flight control computers for controlling the spoilers, flaps, and leading-edge slats, was also adopted on later-built models. 
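To make the centre-of-gravity control mentioned above concrete, the following is a minimal sketch of the weighted-average balance arithmetic behind fuel-transfer trim. The station positions, masses, and transfer quantity are invented for illustration and are not Airbus A300 figures.

```python
# Minimal sketch of the centre-of-gravity (CG) arithmetic behind fuel-transfer trim.
# All station positions (metres from the nose) and masses (kg) are invented
# for illustration; they are not Airbus A300 data.

def centre_of_gravity(items):
    """Weighted average position: sum(mass * position) / sum(mass)."""
    total_mass = sum(mass for mass, _ in items)
    return sum(mass * position for mass, position in items) / total_mass

# (mass_kg, position_m): empty aircraft, payload, wing tank fuel, aft trim tank fuel.
loading = [
    (90_000, 25.0),  # operating empty weight acting near mid-fuselage
    (20_000, 24.0),  # payload
    (30_000, 23.0),  # wing tank fuel
    (5_000, 38.0),   # aft trim tank fuel
]

print(f"CG before transfer: {centre_of_gravity(loading):.2f} m from the nose")

# Pump 3,000 kg of fuel from the wing tank to the aft trim tank: total weight
# is unchanged, but the CG shifts rearwards.
loading[2] = (loading[2][0] - 3_000, loading[2][1])
loading[3] = (loading[3][0] + 3_000, loading[3][1])

print(f"CG after transfer:  {centre_of_gravity(loading):.2f} m from the nose")
```

Shifting fuel aft in cruise moves the centre of gravity rearwards, which reduces the downforce the tailplane must generate and therefore the trim drag; this fuel-saving effect is what such trim systems exploit.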
Additional composites were also made use of, such as carbon-fiber-reinforced polymer (CFRP), as well as their presence in an increasing proportion of the aircraft's components, including the spoilers, rudder, air brakes, and landing gear doors. Another feature of later aircraft was the addition of wingtip fences, which improved aerodynamic performance and thus reduced cruise fuel consumption by about 1.5% for the A300-600. In addition to passenger duties, the A300 became widely used by air freight operators; according to Airbus, it is the best selling freight aircraft of all time. Various variants of the A300 were built to meet customer demands, often for diverse roles such as aerial refueling tankers, freighter models (new-build and conversions), combi aircraft, military airlifter, and VIP transport. Perhaps the most visually unique of the variants is the A300-600ST Beluga, an oversize cargo-carrying model operated by Airbus to carry aircraft sections between their manufacturing facilities. The A300 was the basis for, and retained a high level of commonality with, the second airliner produced by Airbus, the smaller Airbus A310. Operational history. On 23 May 1974, the first A300 to enter service performed the first commercial flight of the type, flying from Paris to London, for Air France. Immediately after the launch, sales of the A300 were weak for some years, with most orders going to airlines that had an obligation to favor the domestically made product – notably Air France and Lufthansa, the first two airlines to place orders for the type. Following the appointment of Bernard Lathière as Henri Ziegler's replacement, an aggressive sales approach was adopted. Indian Airlines was the world's first domestic airline to purchase the A300, ordering three aircraft with three options. However, between December 1975 and May 1977, there were no sales for the type. During this period a number of "whitetail" A300s – completed but unsold aircraft – were completed and stored at Toulouse, and production fell to half an aircraft per month amid calls to pause production completely. During the flight testing of the A300B2, Airbus held a series of talks with Korean Air on the topic of developing a longer-range version of the A300, which would become the A300B4. In September 1974, Korean Air placed an order for four A300B4s with options for two further aircraft; this sale was viewed as significant as it was the first non-European international airline to order Airbus aircraft. Airbus had viewed South-East Asia as a vital market that was ready to be opened up and believed Korean Air to be the 'key'. Airlines operating the A300 on short haul routes were forced to reduce frequencies to try and fill the aircraft. As a result, they lost passengers to airlines operating more frequent narrow body flights. Eventually, Airbus had to build its own narrowbody aircraft (the A320) to compete with the Boeing 737 and McDonnell Douglas DC-9/MD-80. The savior of the A300 was the advent of ETOPS, a revised FAA rule which allows twin-engine jets to fly long-distance routes that were previously off-limits to them. This enabled Airbus to develop the aircraft as a medium/long range airliner. In 1977, US carrier Eastern Air Lines leased four A300s as an in-service trial. CEO Frank Borman was impressed that the A300 consumed 30% less fuel, even less than expected, than his fleet of L-1011s. Borman proceeded to order 23 A300s, becoming the first U.S. customer for the type. 
This order is often cited as the point at which Airbus came to be seen as a serious competitor to the large American aircraft manufacturers Boeing and McDonnell Douglas. Aviation author John Bowen alleged that various concessions, such as loan guarantees from European governments and compensation payments, were a factor in the decision as well. The Eastern Air Lines breakthrough was shortly followed by an order from Pan Am. From then on, the A300 family sold well, eventually reaching a total of 561 delivered aircraft. In December 1977, Aerocondor Colombia became the first Airbus operator in Latin America, leasing one Airbus A300B4-2C, named "Ciudad de Barranquilla". During the late 1970s, Airbus adopted a so-called 'Silk Road' strategy, targeting airlines in the Far East. As a result, the aircraft found particular favor with Asian airlines, being bought by Japan Air System, Korean Air, China Eastern Airlines, Thai Airways International, Singapore Airlines, Malaysia Airlines, Philippine Airlines, Garuda Indonesia, China Airlines, Pakistan International Airlines, Indian Airlines, Trans Australia Airlines and many others. As Asia did not have restrictions similar to the FAA 60-minute rule for twin-engine airliners which existed at the time, Asian airlines used A300s for routes across the Bay of Bengal and South China Sea. In 1977, the A300B4 became the first ETOPS-compliant aircraft, qualifying for Extended Twin Engine Operations over water, providing operators with more versatility in routing. In 1982 Garuda Indonesia became the first airline to fly the A300B4-200FFCC. By 1981, Airbus was growing rapidly, with over 400 aircraft sold to over forty airlines. In 1989, Chinese operator China Eastern Airlines received its first A300; by 2006, the airline operated around 18 A300s, making it the largest operator of both the A300 and the A310 at that time. On 31 May 2014, China Eastern officially retired the last A300-600 in its fleet, having begun drawing down the type in 2010. From 1997 to 2014, a single A300, designated A300 Zero-G, was operated by the European Space Agency (ESA), the Centre national d'études spatiales (CNES) and the German Aerospace Center (DLR) as a reduced-gravity aircraft for conducting research into microgravity; the A300 is the largest aircraft ever to have been used in this capacity. A typical flight would last for two and a half hours, enabling up to 30 parabolas to be performed per flight. By the 1990s, the A300 was being heavily promoted as a cargo freighter. The largest freight operator of the A300 is FedEx Express, which has 65 A300 aircraft in service as of May 2022. UPS Airlines also operates 52 freighter versions of the A300. The final version was the A300-600R, rated for 180-minute ETOPS. The A300 has enjoyed renewed interest in the secondhand market for conversion to freighters; large numbers were converted during the late 1990s. The freighter versions – either new-build A300-600s or converted ex-passenger A300-600s, A300B2s and B4s – account for most of the world's freighter fleet after the Boeing 747 freighter. The A300 provided Airbus the experience of manufacturing and selling airliners competitively. The basic fuselage of the A300 was later stretched (A330 and A340), shortened (A310), or modified into derivatives (A300-600ST "Beluga" Super Transporter). In 2006, the unit cost of an A300-600F was US$105 million. In March 2006, Airbus announced the impending closure of the A300/A310 final assembly line, making them the first Airbus aircraft to be discontinued. 
The final production A300, an A300F freighter, performed its initial flight on 18 April 2007, and was delivered to FedEx Express on 12 July 2007. Airbus has announced a support package to keep A300s flying commercially. Airbus offers the A330-200F freighter as a replacement for the A300 cargo variants. The life of UPS's fleet of 52 A300s, delivered from 2000 to 2006, will be extended to 2035 by a flight deck upgrade based around Honeywell Primus Epic avionics; new displays and flight management system (FMS), improved weather radar, a central maintenance system, and a new version of the current enhanced ground proximity warning system. With light usage of only two to three cycles per day, the fleet will not reach the maximum number of cycles by then. The first modification will be made at Airbus Toulouse in 2019 and certified in 2020. As of July 2017, there were 211 A300s in service with 22 operators, with the largest operator being FedEx Express with 68 A300-600F aircraft. Variants. A300B1. The A300B1 was the first variant to take flight. It was powered by two General Electric CF6-50A engines. Only two prototypes of the variant were built before it was adapted into the A300B2, the first production variant of the airliner. The second prototype was leased to Trans European Airways in 1974. A300B2. A300B2-100. Responding to a need for more seats from Air France, Airbus decided that the first production variant should be larger than the original prototype A300B1. The CF6-50A-powered A300B2-100 was longer than the A300B1 and had an increased maximum takeoff weight (MTOW), allowing for 30 additional seats and bringing the typical passenger count up to 281, with capacity for 20 LD3 containers. Two prototypes were built and the variant made its maiden flight on 28 June 1973, was certified on 15 March 1974 and entered service with Air France on 23 May 1974. A300B2-200. For the A300B2-200, originally designated as the A300B2K, Krueger flaps were introduced at the leading-edge root, the slat angles were reduced from 20 degrees to 16 degrees, and other lift-related changes were made in order to introduce a high-lift system. This was done to improve performance when operating at high-altitude airports, where the air is less dense and lift generation is reduced. The variant had a further increased MTOW, was powered by CF6-50C engines, was certified on 23 June 1976, and entered service with South African Airways in November 1976. CF6-50C1 and CF6-50C2 models were also later fitted depending on customer requirements; these were certified on 22 February 1978 and 21 February 1980 respectively. A300B2-320. The A300B2-320 introduced the Pratt & Whitney JT9D powerplant and was powered by JT9D-59A engines. It retained the MTOW of the B2-200, was certified on 4 January 1980, and entered service with Scandinavian Airlines on 18 February 1980, with only four being produced. A300B4. A300B4-100. The initial A300B4 variant, later named the A300B4-100, included a centre fuel tank for increased fuel capacity and had an increased MTOW. It also featured Krueger flaps and had a similar high-lift system to what was later fitted to the A300B2-200. The variant made its maiden flight on 26 December 1974, was certified on 26 March 1975, and entered service with Germanair in May 1975. A300B4-200. The A300B4-200 had a further increased MTOW and featured an additional optional fuel tank in the rear cargo hold, which would reduce the cargo capacity by two LD3 containers. 
The variant was certified on 26 April 1979. A300-600. The A300-600, officially designated the A300B4-600, was slightly longer than the A300B2 and A300B4 variants and had increased interior space from using a rear fuselage similar to that of the Airbus A310; this allowed it to have two additional rows of seats. It was initially powered by Pratt & Whitney JT9D-7R4H1 engines, but was later fitted with General Electric CF6-80C2 engines, with Pratt & Whitney PW4156 or PW4158 engines being introduced in 1986. Other changes included an improved wing featuring a recambered trailing edge, the incorporation of simpler single-slotted Fowler flaps, the deletion of slat fences, and the removal of the outboard ailerons after they were deemed unnecessary on the A310. The variant made its first flight on 8 July 1983, was certified on 9 March 1984, and entered service in June 1984 with Saudi Arabian Airlines. A total of 313 A300-600s (all versions) have been sold. The A300-600 uses the A310 cockpit, featuring digital technology and electronic displays and eliminating the need for a flight engineer. The FAA issues a single type rating which allows operation of both the A310 and A300-600. A300B10 (A310). Airbus saw demand for an aircraft smaller than the A300. On 7 July 1978, the A310 (initially the A300B10) was launched with orders from Swissair and Lufthansa. On 3 April 1982, the first prototype conducted its maiden flight and it received its type certification on 11 March 1983. Keeping the same eight-abreast cross-section, the A310 is shorter than the initial A300 variants and has a smaller wing. The A310 introduced a two-crew glass cockpit, later adopted for the A300-600 with a common type rating. It was powered by the same GE CF6-80 or Pratt & Whitney JT9D, and later PW4000, turbofans. It can seat 220 passengers in two classes, or 240 in all-economy. It has overwing exits between the two main front and rear door pairs. In April 1983, the aircraft entered revenue service with Swissair and competed with the Boeing 767-200, introduced six months before. Its longer range and ETOPS regulations allowed it to be operated on transatlantic flights. Until the last delivery in June 1998, 255 aircraft were produced, as it was succeeded by the larger Airbus A330-200. It has cargo aircraft versions, and was derived into the Airbus A310 MRTT military tanker/transport. A300-600ST. Commonly referred to as the Airbus Beluga or "Airbus Super Transporter," these five airframes are used by Airbus to ferry parts between the company's disparate manufacturing facilities, thus enabling workshare distribution. They replaced the four Aero Spacelines Super Guppys previously used by Airbus. ICAO code: A3ST Operators. There were 228 A300 family aircraft in commercial service. The five largest operators were FedEx Express (70), UPS Airlines (52), European Air Transport Leipzig (23), Iran Air (11), and Mahan Air (11). Accidents and incidents. As of June 2021, the A300 had been involved in 77 occurrences, including 24 hull-loss accidents causing 1,133 fatalities, as well as criminal occurrences and hijackings that also caused fatalities. Aircraft on display. Four A300s are currently preserved.
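The center-of-gravity control mentioned in the design section above works by pumping fuel between tanks to shift the aircraft's balance point aft or forward in cruise. The sketch below illustrates only the underlying moment arithmetic; the tank names, masses and arm positions are hypothetical illustration values, not A300 data, and the trim-drag remark is a general aerodynamic rule of thumb rather than an Airbus figure.

```python
# Toy illustration of center-of-gravity (CG) control by fuel transfer.
# All numbers are hypothetical; they are not Airbus A300 figures.

def center_of_gravity(items):
    """CG position = sum(mass * arm) / sum(mass), arms measured from a datum."""
    total_mass = sum(mass for mass, _ in items)
    total_moment = sum(mass * arm for mass, arm in items)
    return total_moment / total_mass

# (mass in kg, arm in metres aft of a nose datum)
aircraft = [(90_000, 25.0)]   # airframe plus payload, lumped into one item
wing_tank = [30_000, 24.0]    # [fuel mass, tank arm]
trim_tank = [0, 38.0]         # tail trim tank, initially empty

def current_cg():
    return center_of_gravity(aircraft + [tuple(wing_tank), tuple(trim_tank)])

print(f"CG before transfer: {current_cg():.2f} m aft of datum")

# Transfer 5 tonnes of fuel from the wing tank to the tail trim tank:
# moving mass aft moves the CG aft, which can reduce trim drag in cruise.
transfer = 5_000
wing_tank[0] -= transfer
trim_tank[0] += transfer

print(f"CG after transfer:  {current_cg():.2f} m aft of datum")
```

Running the sketch shows the CG moving from about 24.75 m to about 25.33 m aft of the datum, which is the whole effect the system exploits.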
2526
Agostino Carracci
Agostino Carracci (or Caracci; 16 August 1557 – 22 March 1602) was an Italian painter, printmaker, tapestry designer, and art teacher. He was, together with his brother, Annibale Carracci, and cousin, Ludovico Carracci, one of the founders of the Accademia degli Incamminati (Academy of the Progressives) in Bologna. This teaching academy promoted the Carracci's emphasis on drawing from life. It encouraged progressive tendencies in art as a reaction to the Mannerist distortion of anatomy and space. The academy helped propel painters of the School of Bologna to prominence. Life. Agostino Carracci was born in Bologna as the son of a tailor. He was the elder brother of Annibale Carracci and the cousin of Ludovico Carracci. He initially trained as a goldsmith. He later studied painting, first with Prospero Fontana, who had been Lodovico's master, and later with Bartolomeo Passarotti. He traveled to Parma to study the works of Correggio. Accompanied by his brother Annibale, he spent a long time in Venice, where he trained as an engraver under the renowned Cornelis Cort. Starting from 1574 he worked as a reproductive engraver, copying works of 16th century masters such as Federico Barocci, Tintoretto, Antonio Campi, Veronese and Correggio. He also produced some original prints, including two etchings. He traveled to Venice (1582, 1587–1589) and Parma (1586–1587). Together with Annibale and Ludovico he worked in Bologna on the fresco cycles in Palazzo Fava ("Histories of Jason and Medea", 1584) and Palazzo Magnani ("Histories of Romulus", 1590–1592). In 1592 he also painted the "Communion of St. Jerome", now in the Pinacoteca di Bologna and considered his masterwork. In 1620, Giovanni Lanfranco, a pupil of the Carracci, famously accused another Carracci student, Domenichino, of plagiarizing this painting. His altarpiece of the "Madonna with Child and Saints" (1586) is in the National Gallery of Parma. In 1598 Carracci joined his brother Annibale in Rome, to collaborate on the decoration of the Gallery in Palazzo Farnese. A "triple Portrait" from 1598 to 1600, now in Naples, is an example of genre painting. In 1600 he was called to Parma by Duke Ranuccio I Farnese to begin the decoration of the Palazzo del Giardino, but he died before it was finished. His friend the poet Claudio Achillini dedicated an epitaph to him after his death, which was later published by Carlo Cesare Malvasia in his life of the Carracci. Agostino's son Antonio Carracci was also a painter, and attempted to compete with his father's Academy. An engraving by Agostino Carracci after the painting "Love in the Golden Age" by the 16th-century Flemish painter Paolo Fiammingo was the inspiration for Matisse's "Le bonheur de vivre" (Joy of Life). Works. "Oil on canvas unless otherwise noted"
2528
Adenylyl cyclase
Adenylate cyclase (EC 4.6.1.1, also commonly known as adenyl cyclase and adenylyl cyclase, abbreviated AC) is an enzyme with systematic name ATP diphosphate-lyase (cyclizing; 3′,5′-cyclic-AMP-forming). It catalyzes the following reaction: ATP → 3′,5′-cyclic AMP + diphosphate. It has key regulatory roles in essentially all cells. It is the most polyphyletic known enzyme: six distinct classes have been described, all catalyzing the same reaction but representing unrelated gene families with no known sequence or structural homology. The best known class of adenylyl cyclases is class III or AC-III (Roman numerals are used for classes). AC-III occurs widely in eukaryotes and has important roles in many human tissues. All classes of adenylyl cyclase catalyse the conversion of adenosine triphosphate (ATP) to 3',5'-cyclic AMP (cAMP) and pyrophosphate. Magnesium ions are generally required and appear to be closely involved in the enzymatic mechanism. The cAMP produced by AC then serves as a regulatory signal via specific cAMP-binding proteins, either transcription factors, enzymes (e.g., cAMP-dependent kinases), or ion transporters. Classes. Class I. The first class of adenylyl cyclases occurs in many bacteria including "E. coli" (as CyaA [unrelated to the Class II enzyme]). This was the first class of AC to be characterized. It was observed that "E. coli" deprived of glucose produce cAMP that serves as an internal signal to activate expression of genes for importing and metabolizing other sugars. cAMP exerts this effect by binding the transcription factor CRP, also known as CAP. Class I ACs are large cytosolic enzymes (~100 kDa) with a large regulatory domain (~50 kDa) that indirectly senses glucose levels. No crystal structure is available for class I AC, but some indirect structural information is available for this class. It is known that the N-terminal half is the catalytic portion, and that it requires two Mg2+ ions. S103, S113, D114, D116 and W118 are the five absolutely essential residues. The class I catalytic domain belongs to the same superfamily as the palm domain of DNA polymerase beta. Aligning its sequence onto the structure of a related archaeal CCA tRNA nucleotidyltransferase allows for assignment of the residues to specific functions: γ-phosphate binding, structural stabilization, a DxD motif for metal ion binding, and finally ribose binding. Class II. These adenylyl cyclases are toxins secreted by pathogenic bacteria such as "Bacillus anthracis", "Bordetella pertussis", "Pseudomonas aeruginosa", and "Vibrio vulnificus" during infections. These bacteria also secrete proteins that enable the AC-II to enter host cells, where the exogenous AC activity undermines normal cellular processes. The genes for Class II ACs are known as cyaA; one of these toxins is a component of anthrax toxin. Several crystal structures are known for AC-II enzymes. Class III. These adenylyl cyclases are the most familiar based on extensive study due to their important roles in human health. They are also found in some bacteria, notably "Mycobacterium tuberculosis", where they appear to have a key role in pathogenesis. Most AC-IIIs are integral membrane proteins involved in transducing extracellular signals into intracellular responses. A Nobel Prize was awarded to Earl Sutherland in 1971 for discovering the key role of AC-III in human liver, where adrenaline indirectly stimulates AC to mobilize stored energy in the "fight or flight" response. 
The effect of adrenaline is via a G protein signaling cascade, which transmits chemical signals from outside the cell across the membrane to the inside of the cell (cytoplasm). The outside signal (in this case, adrenaline) binds to a receptor, which transmits a signal to the G protein, which transmits a signal to adenylyl cyclase, which transmits a signal by converting adenosine triphosphate to cyclic adenosine monophosphate (cAMP). Cyclic AMP is an important molecule in eukaryotic signal transduction, a so-called second messenger. Adenylyl cyclases are often activated or inhibited by G proteins, which are coupled to membrane receptors and thus can respond to hormonal or other stimuli. Following activation of adenylyl cyclase, the resulting cAMP acts as a second messenger by interacting with and regulating other proteins such as protein kinase A and cyclic nucleotide-gated ion channels. Photoactivated adenylyl cyclase (PAC) was discovered in "Euglena gracilis" and can be expressed in other organisms through genetic manipulation. Shining blue light on a cell containing PAC activates it and abruptly increases the rate of conversion of ATP to cAMP. This is a useful technique for researchers in neuroscience because it allows them to quickly increase the intracellular cAMP levels in particular neurons, and to study the effect of that increase in neural activity on the behavior of the organism. A green-light activated rhodopsin adenylyl cyclase (CaRhAC) has recently been engineered by modifying the nucleotide binding pocket of rhodopsin guanylyl cyclase. Structure. Most class III adenylyl cyclases are transmembrane proteins with 12 transmembrane segments. The protein is organized with 6 transmembrane segments, then the C1 cytoplasmic domain, then another 6 membrane segments, and then a second cytoplasmic domain called C2. The important parts for function are the N-terminus and the C1 and C2 regions. The C1a and C2a subdomains are homologous and form an intramolecular 'dimer' that forms the active site. In "Mycobacterium tuberculosis" and many other bacterial cases, the AC-III polypeptide is only half as long, comprising one 6-transmembrane domain followed by a cytoplasmic domain, but two of these form a functional homodimer that resembles the mammalian architecture with two active sites. In non-animal class III ACs, the catalytic cytoplasmic domain is seen associated with other (not necessarily transmembrane) domains. Class III adenylyl cyclase domains can be further divided into four subfamilies, termed class IIIa through IIId. Animal membrane-bound ACs belong to class IIIa. Mechanism. The reaction happens with two metal cofactors (Mg or Mn) coordinated to the two aspartate residues on C1. They facilitate the nucleophilic attack of the 3'-OH group of the ribose on the α-phosphoryl group of ATP. A pair of lysine and aspartate residues on C2 selects ATP over GTP as the substrate, so that the enzyme is not a guanylyl cyclase. A pair of arginine and asparagine residues on C2 stabilizes the transition state. In many proteins, these residues are nevertheless mutated while adenylyl cyclase activity is retained. Types. There are ten known isoforms of adenylyl cyclase in mammals, sometimes called simply AC1 through AC10; somewhat confusingly, Roman numerals are also sometimes used for these isoforms, even though they all belong to the overall AC class III. 
They differ mainly in how they are regulated, and are differentially expressed in various tissues throughout mammalian development. Regulation. Adenylyl cyclase is regulated by G proteins, which can be found in the monomeric form or the heterotrimeric form, consisting of three subunits. Adenylyl cyclase activity is controlled by heterotrimeric G proteins. The inactive or inhibitory form exists when the complex consists of alpha, beta, and gamma subunits, with GDP bound to the alpha subunit. In order to become active, a ligand must bind to the receptor and cause a conformational change. This conformational change causes the alpha subunit to dissociate from the complex and become bound to GTP. This G-alpha-GTP complex then binds to adenylyl cyclase and causes activation and the release of cAMP. Since a good signal requires enzymes that switch signals on and off quickly, there must also be a mechanism by which adenylyl cyclase is deactivated and cAMP production is inhibited. The deactivation of the active G-alpha-GTP complex is accomplished rapidly by GTP hydrolysis, catalyzed by the intrinsic GTPase activity of the alpha subunit. Adenylyl cyclase is also regulated by forskolin, as well as by other isoform-specific effectors. In neurons, calcium-sensitive adenylyl cyclases are located next to calcium ion channels for faster reaction to Ca2+ influx; they are suspected of playing an important role in learning processes. This is supported by the fact that adenylyl cyclases are "coincidence detectors", meaning that they are activated only by several different signals occurring together. In peripheral cells and tissues adenylyl cyclases appear to form molecular complexes with specific receptors and other signaling proteins in an isoform-specific manner. Function. Individual transmembrane adenylyl cyclase isoforms have been linked to numerous physiological functions. Soluble adenylyl cyclase (sAC, AC10) has a critical role in sperm motility. Adenylyl cyclase has been implicated in memory formation, functioning as a coincidence detector. Class IV. AC-IV was first reported in the bacterium "Aeromonas hydrophila", and the structure of the AC-IV from "Yersinia pestis" has been reported. These are the smallest of the AC enzyme classes; the AC-IV (CyaB) from "Yersinia" is a dimer of 19 kDa subunits with no known regulatory components. AC-IV forms a superfamily with mammalian thiamine triphosphatase, called CYTH (CyaB, thiamine triphosphatase). Classes V and VI. These forms of AC have been reported in specific bacteria ("Prevotella ruminicola" and "Rhizobium etli", respectively) and have not been extensively characterized. There are additional members (~400 in Pfam) known to be in class VI. Class VI enzymes possess a catalytic core similar to the one in Class III.
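The regulation cycle described above (GDP-bound inactive trimer, receptor-triggered exchange of GDP for GTP on the alpha subunit, stimulation of adenylyl cyclase, and shut-off by the subunit's intrinsic GTPase) can be pictured as a simple state machine. The sketch below is a schematic, qualitative toy, not a quantitative or mechanistic model; the class and function names are invented for illustration and only mirror the steps stated in the article.

```python
# Schematic toy model of the G-protein / adenylyl cyclase activation cycle.
# Qualitative illustration only; names and transitions are simplifications.

class GAlphaSubunit:
    def __init__(self):
        self.bound_nucleotide = "GDP"   # inactive, part of the alpha-beta-gamma trimer
        self.in_trimer = True

    def activate(self):
        """Ligand-bound receptor promotes GDP->GTP exchange and dissociation."""
        self.bound_nucleotide = "GTP"
        self.in_trimer = False

    def hydrolyze_gtp(self):
        """Intrinsic GTPase activity switches the subunit off again."""
        self.bound_nucleotide = "GDP"
        self.in_trimer = True

    @property
    def active(self):
        return self.bound_nucleotide == "GTP" and not self.in_trimer


def adenylyl_cyclase(g_alpha, atp_pool):
    """Convert ATP to cAMP only while stimulated by active G-alpha-GTP."""
    if g_alpha.active and atp_pool > 0:
        return atp_pool - 1, 1   # one ATP consumed, one cAMP produced
    return atp_pool, 0


g_alpha = GAlphaSubunit()
atp, camp = 5, 0

# No ligand yet: the enzyme is not stimulated, so no cAMP is made.
atp, made = adenylyl_cyclase(g_alpha, atp)
camp += made

# A ligand (e.g. adrenaline) binds the receptor: G-alpha exchanges GDP for GTP.
g_alpha.activate()
atp, made = adenylyl_cyclase(g_alpha, atp)
camp += made

# GTP hydrolysis by the intrinsic GTPase terminates the signal.
g_alpha.hydrolyze_gtp()
atp, made = adenylyl_cyclase(g_alpha, atp)
camp += made

print(f"ATP remaining: {atp}, cAMP produced: {camp}")  # ATP remaining: 4, cAMP produced: 1
```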
2529
Alexandra
Alexandra is the feminine form of the given name Alexander. Etymologically, the name is a compound of the Greek verb ἀλέξειν (alexein; meaning 'to defend') and ἀνήρ (anḗr, genitive ἀνδρός, andrós; meaning 'man'). Thus it may be roughly translated as "defender of man" or "protector of man". The name Alexandra was one of the epithets given to the Greek goddess Hera and as such is usually taken to mean "one who comes to save warriors". The earliest attested form of the name is the Mycenaean Greek a-re-ka-sa-da-ra, written in the Linear B syllabic script. Alexandra and its masculine equivalent, Alexander, are both common names in Greece as well as in countries where Germanic, Romance, and Slavic languages are spoken.
2536
Articolo 31
Articolo 31 is a band from Milan, Italy, formed in 1990 by J-Ax and DJ Jad, combining hip hop, funk, pop and traditional Italian musical forms. They are one of the most popular Italian hip hop groups. Band history. Articolo 31 were formed by rapper J-Ax (real name Alessandro Aleotti) and DJ Jad (Vito Luca Perrini). In the spoken intro of the album "Strade di Città" ("City Streets"), it is stated that the band is named after the article of the Irish constitution guaranteeing freedom of the press; however, Article 31 of the Irish constitution does not concern freedom of the press, and they probably meant Section 31 of the Broadcasting Authority Act. Articolo 31 released one of the first Italian hip hop records, "Strade di città", in 1993. Soon, they signed with BMG Ricordi and started to mix rap with pop music – a move that earned them great commercial success but that alienated the underground hip hop scene, who perceived them as traitors. In 1997, DJ Gruff dissed Articolo 31 in a track titled "1 vs 2" on the first album of the beatmaker Fritz da Cat, starting a feud that would go on for years. In 2001, Articolo 31 collaborated with the American old school rapper Kurtis Blow on the album "XChé SI!". In the same year, they made the film "Senza filtro" (in English, "Without filter"). Their producer was Franco Godi, who also produced the music for the "Signor Rossi" animated series. Their 2002 album "Domani smetto" represented a further departure from hip hop, increasingly relying on the formula of rapping over pop music samples. Several of their songs revolve around the theme of soft-drug legalization in Italy, arguing strongly in favour. Following their 2003 album "Italiano medio", the band took a break. Both J-Ax and DJ Jad have been involved with solo projects. In 2006, the group declared an indefinite hiatus. Their posse, "Spaghetti Funk", includes other popular performers like Space One and pop rappers Gemelli DiVersi. On 4 December 2022, Articolo 31's participation in the Sanremo Music Festival 2023 was officially announced; "Un bel viaggio" was later announced as their entry.
2543
Alexander Kerensky
Alexander Fyodorovich Kerensky (4 May 1881 – 11 June 1970) was a Russian lawyer and revolutionary who led the Russian Provisional Government and the short-lived Russian Republic for three months from late July to early November 1917. After the February Revolution of 1917, he joined the newly formed provisional government, first as Minister of Justice, then as Minister of War, and after July as the government's second Minister-Chairman. He was the leader of the social-democratic Trudovik faction of the Socialist Revolutionary Party. Kerensky was also a vice-chairman of the Petrograd Soviet, a position that held a sizable amount of power. Kerensky became the prime minister of the Provisional Government, and his tenure was consumed with World War I. Despite mass opposition to the war, Kerensky chose to continue Russia's participation. His government cracked down on anti-war sentiment and dissent in 1917, which made his administration even more unpopular. Kerensky remained in power until the October Revolution. This revolution saw the Bolsheviks replace his government with a Marxist one, led by Vladimir Lenin. Kerensky fled Russia and lived the remainder of his life in exile. He divided his time between Paris and New York City. Kerensky worked for the conservative Hoover Institution at Stanford University. Biography. Early life and activism. Alexander Kerensky was born in Simbirsk (now Ulyanovsk) on the Volga River on 4 May 1881 and was the eldest son in the family. His father, Fyodor Mikhailovich Kerensky, was a teacher and director of the local gymnasium and was later promoted to be an inspector of public schools. His paternal grandfather Mikhail Ivanovich served as a priest in the village of Kerenka in the Gorodishchensky district of the Penza province from 1830. The surname Kerensky comes from the name of this village. His maternal grandfather was head of the Topographical Bureau of the Kazan Military District. His mother, Nadezhda Aleksandrovna (née Adler), was the granddaughter of a former serf who had managed to purchase his freedom before serfdom was abolished in 1861. He subsequently embarked upon a mercantile career, in which he prospered. This allowed him to move his business to Moscow, where he continued his success and became a wealthy Moscow merchant. Kerensky's father was the teacher of Vladimir Ulyanov (Lenin), and members of the Kerensky and Ulyanov families were friends. In 1889, when Kerensky was eight, the family moved to Tashkent, where his father had been appointed the main inspector of public schools (superintendent). Alexander graduated with honours in 1899. The same year he entered St. Petersburg University, where he studied history and philology. The next year he switched to law. He earned his law degree in 1904 and married Olga Lvovna Baranovskaya, the daughter of a Russian general, the same year. Kerensky joined the Narodnik movement and worked as a legal counsel to victims of the Revolution of 1905. At the end of 1904, he was jailed on suspicion of belonging to a militant group. Afterwards, he gained a reputation for his work as a defence lawyer in a number of political trials of revolutionaries. In 1912, Kerensky became widely known when he visited the goldfields at the Lena River and published material about the Lena Goldfields incident. 
In the same year, Kerensky was elected to the Fourth Duma as a member of the Trudoviks, a socialist, non-Marxist labour party founded by Alexis Aladin that was associated with the Socialist-Revolutionary Party, and joined a Freemason society uniting the anti-monarchy forces that strived for democratic renewal of Russia. In fact, the Socialist Revolutionary Party bought Kerensky a house, as he otherwise would not be eligible for election to the Duma, according to the Russian property-laws. He soon became a significant member of the Progressive Bloc, which included several Socialist Parties, Mensheviks, and Liberals – but not Bolsheviks. He was a brilliant orator and skilled parliamentary leader of the socialist opposition to the government of Tsar Nicholas II. During the 4th Session of the Fourth Duma in spring 1915, Kerensky appealed to Rodzianko with a request from the Council of elders to inform the Tsar that to succeed in the war he must: 1) change his domestic policy, 2) proclaim a General Amnesty for political prisoners, 3) restore the Constitution of Finland, 4) declare autonomy of Poland, 5) provide national minorities autonomy in the field of culture, 6) abolish restrictions against Jews, 7) end religious intolerance, 8) stop the harassment of legal trade union organizations. Kerensky was an active member of the irregular Freemasonic lodge, the Grand Orient of Russia's Peoples, which derived from the Grand Orient of France. Kerensky was Secretary-General of the Grand Orient of Russia's Peoples and stood down following his ascent to the government in July 1917. He was succeeded by a Menshevik, Alexander Halpern. Rasputin. In response to bitter resentments held against the imperial favourite Grigori Rasputin in the midst of Russia's failing effort in World War I, Kerensky, at the opening of the Duma on 2 November 1916, called the imperial ministers "hired assassins" and "cowards", and alleged that they were "guided by the contemptible Grishka Rasputin!" Grand Duke Nikolai Mikhailovich, Prince Lvov, and general Mikhail Alekseyev attempted to persuade the emperor Nicholas II to send away the Empress Alexandra Feodorovna, Rasputin's steadfast patron, either to the Livadia Palace in Yalta or to Britain. Mikhail Rodzianko, Zinaida Yusupova (the mother of Felix Yusupov), Alexandra's sister Elisabeth, Grand Duchess Victoria and the empress's mother-in-law Maria Feodorovna also tried to influence and pressure the imperial couple to remove Rasputin from his position of influence within the imperial household, but without success. According to Kerensky, Rasputin had terrorised the empress by threatening to return to his native village. Members of the nobility murdered Rasputin in December 1916, burying him near the imperial residence in Tsarskoye Selo. Shortly after the February Revolution of 1917, Kerensky ordered soldiers to re-bury the corpse at an unmarked spot in the countryside. However, the truck broke down or was forced to stop because of the snow on Lesnoe Road outside of St. Petersburg. It is likely the corpse was incinerated (between 3 and 7 in the morning) in the cauldrons of the nearby boiler shop of the Saint Petersburg State Polytechnical University, including the coffin, without leaving a single trace. Russian Provisional Government of 1917. When the February Revolution broke out in 1917, Kerensky – together with Pavel Milyukov – was one of its most prominent leaders. 
As one of the Duma's most well-known speakers against the monarchy and as a lawyer and defender of many revolutionaries, Kerensky became a member of the Provisional Committee of the State Duma and was elected vice-chairman of the newly formed Petrograd Soviet. These two bodies, the Duma and the Petrograd Soviet, or – rather – their respective executive committees, soon became each other's antagonists on most matters except regarding the end of the Tsar's autocracy. The Petrograd Soviet grew to include 3000 to 4000 members, and their meetings could drown in a blur of everlasting orations. The Executive Committee of the Petrograd Soviet, or Ispolkom, was formed as a self-appointed committee, with (eventually) three members from each of the parties represented in the Soviet. Kerensky became one of the members representing the Social Revolutionary party (the SRs). Without any consultation with the government, the Ispolkom of the Soviet issued the infamous Order No. 1, intended only for the 160,000-strong Petrograd garrison, but soon interpreted as applicable to all soldiers at the front. The order stipulated that all military units should form committees like the Petrograd Soviet. This led to confusion and "stripping of officers' authority"; further, "Order No. 3" stipulated that the military was subordinate to Ispolkom in the political hierarchy. The ideas came from a group of Socialists and aimed to limit the officers' power to military affairs. The socialist intellectuals believed the officers to be the most likely counterrevolutionary elements. Kerensky's role in these orders is unclear, but he participated in the decisions. But just as before the revolution he had defended many who disliked the Tsar, he now saved the lives of many of the Tsar's civil servants about to be lynched by mobs. Additionally, the Duma formed an executive committee which eventually became the Russian Provisional Government. As there was little trust between Ispolkom and this Government (and as he was about to accept the office of Attorney General in the Provisional Government), Kerensky gave a most passionate speech, not just to the Ispolkom, but to the entire Petrograd Soviet. He then swore, as Minister, never to violate democratic values, and ended his speech with the words "I cannot live without the people. In the moment you begin to doubt me, then kill me." The huge majority (workers and soldiers) gave him great applause, and Kerensky now became the first and "the only one" who participated in both the Provisional Government and the Ispolkom. As a link between Ispolkom and the Provisional Government, the quite ambitious Kerensky stood to benefit from this position. After the first government crisis over Pavel Milyukov's secret note re-committing Russia to its original war aims on 2–4 May, Kerensky became the Minister of War and the dominant figure in the newly formed socialist-liberal coalition government. On 10 May (Julian calendar), Kerensky started for the front and visited one division after another, urging the men to do their duty. His speeches were impressive and convincing for the moment, but had little lasting effect. Under Allied pressure to continue the war, he launched what became known as the Kerensky Offensive against the Austro-Hungarian/German South Army. At first successful, the offensive soon met strong resistance and the Central Powers riposted with a strong counter-attack. 
The Russian army retreated and suffered heavy losses, and it became clear from many incidents of desertion, sabotage, and mutiny that the army was no longer willing to attack. The military heavily criticised Kerensky for his liberal policies, which included stripping officers of their mandates and handing over control to revolutionary-inclined "soldier committees" instead; abolition of the death penalty; and allowing revolutionary agitators to be present at the front. Many officers jokingly referred to commander-in-chief Kerensky as the "persuader-in-chief". On 2 July 1917 the Provisional Government's first coalition collapsed over the question of Ukraine's autonomy. Following the July Days unrest in Petrograd (3–7 July [16–20 July, N.S.] 1917) and the official suppression of the Bolsheviks, Kerensky succeeded Prince Lvov as Russia's Prime Minister. Following the Kornilov Affair, an attempted military coup d'état at the end of August, and the resignation of the other ministers, he appointed himself Supreme Commander-in-Chief as well. On 15 September Kerensky proclaimed Russia a republic, which was contrary to the non-socialists' understanding that the Provisional Government should hold power only until a Constituent Assembly should meet to decide Russia's form of government, but which was in line with the long-proclaimed aim of the Socialist Revolutionary Party. He formed a five-member Directory, which consisted of himself, Minister of Foreign Affairs Mikhail Tereshchenko, Minister of War General Aleksandr Verkhovsky, Minister of the Navy Admiral Dmitry Verderevsky and the Minister of Posts and Telegraphs. He retained his post in the final coalition government in October 1917 until the Bolsheviks overthrew it. Kerensky faced a major challenge: three years of participation in World War I had exhausted Russia, while the provisional government offered little motivation for a victory outside of continuing Russia's obligations towards its allies. Furthermore, Vladimir Lenin and his Bolshevik party were promising "peace, land, and bread" under a communist system. The Russian army, war-weary, ill-equipped, dispirited and ill-disciplined, was disintegrating, with soldiers deserting in large numbers. By autumn 1917, an estimated two million men had unofficially left the army. Kerensky and other political leaders continued Russia's involvement in World War I, thinking that a glorious victory was the only road forward, and fearing that the economy, already under huge stress from the war effort, might become increasingly unstable if vital supplies from France and from the United Kingdom ceased flowing. The dilemma of whether to withdraw was a great one, and Kerensky's inconsistent and impractical policies further destabilised the army and the country at large. Furthermore, Kerensky adopted a policy that isolated the right-wing conservatives, both democratic and monarchist-oriented. His philosophy of "no enemies to the left" greatly empowered the Bolsheviks and gave them a free hand, allowing them to take over the military arm or "voyenka" of the Petrograd and Moscow Soviets. 
His arrest of Lavr Kornilov and other officers left him without strong allies against the Bolsheviks, who ended up being Kerensky's strongest and most determined adversaries, as opposed to the right wing, which evolved into the White movement. October Revolution of 1917. During the Kornilov Affair, Kerensky had distributed arms to the Petrograd workers, and by November most of these armed workers had gone over to the Bolsheviks. On 25 October [7 November, N.S.] 1917, the Bolsheviks launched the second Russian revolution of the year. Kerensky's government in Petrograd had almost no support in the city. Only one small force, a subdivision of the 2nd company of the First Petrograd Women's Battalion, also known as The Women's Death Battalion, was willing to fight for the government against the Bolsheviks, but this force was overwhelmed by the numerically superior pro-Bolshevik forces, defeated, and captured. The Bolsheviks overthrew the government rapidly by seizing governmental buildings and the Winter Palace. Kerensky escaped the Bolsheviks and fled to Pskov, where he rallied some loyal troops for an attempt to retake the city. His troops managed to capture Tsarskoye Selo but were beaten the next day at Pulkovo. Kerensky narrowly escaped, and he spent the next few weeks in hiding before fleeing the country, eventually arriving in France. During the Russian Civil War, he supported neither side, as he opposed both the Bolshevik regime and the White Movement. Personal life. Kerensky was married to Olga Lvovna Baranovskaya and they had two sons, Oleg (1905–1984) and Gleb (1907–1990), who both went on to become engineers. According to IMDb.com, Kerensky's grandson (also named Oleg) played his grandfather's role in the 1981 film "Reds". Kerensky and Olga were divorced in 1939, soon after he settled in Paris. Later that year, while visiting the United States, he met and married Lydia Ellen "Nell" Tritton (1899–1946), the Australian former journalist who had become his press secretary and translator. The marriage took place in Martins Creek, Pennsylvania. When Germany invaded France in 1940, they emigrated to the United States. After the Axis invasion of the Soviet Union in 1941, Kerensky offered his support to Joseph Stalin. When his wife Nell became terminally ill in 1945, Kerensky travelled with her to Brisbane, Australia, and lived there with her family. She suffered a stroke in February 1946, and he remained there until her death on 10 April 1946. Kerensky then returned to the United States, where he spent the rest of his life. Kerensky eventually settled in New York City, living on the Upper East Side on 91st Street near Central Park, but he spent much of his time at the Hoover Institution at Stanford University in California, where he both used and contributed to the Institution's huge archive on Russian history, and where he taught graduate courses. He wrote and broadcast extensively on Russian politics and history. His last public lecture was delivered at Kalamazoo College in Kalamazoo, Michigan, in October 1967. Death. Kerensky died of arteriosclerotic heart disease at St. Luke's Hospital in New York City on 11 June 1970, after initially being admitted for injuries sustained in a fall. At 89, he was one of the last surviving major participants in the turbulent events of 1917. The local Russian Orthodox Churches in New York City refused to grant Kerensky burial rites because of his association with Freemasonry, and because they saw him as largely responsible for the Bolsheviks seizing power. 
A Serbian Orthodox Church also refused burial rites. Kerensky's body was flown to London, where he was buried at the non-denominational Putney Vale Cemetery. Archives. Papers of the Kerensky family are held at the Cadbury Research Library, University of Birmingham.
2544
Ansgar
Ansgar (8 September 801 – 3 February 865), also known as Anskar, Saint Ansgar, Saint Anschar or Oscar, was Archbishop of Hamburg-Bremen in the northern part of the Kingdom of the East Franks. Ansgar became known as the "Apostle of the North" because of his travels, and the See of Hamburg received the missionary mandate to bring Christianity to Northern Europe. Life. Ansgar was the son of a noble Frankish family, born near Amiens (present-day France). After his mother's early death, Ansgar was brought up in the Benedictine monastery of Corbie in Picardy. According to the "Vita Ansgarii" ("Life of Ansgar"), when the little boy learned in a vision that his mother was in the company of Mary, mother of Jesus, his careless attitude toward spiritual matters changed to seriousness. His pupil, successor, and eventual biographer Rimbert considered the visions (of which this was the first) to have been Ansgar's main life motivator. Ansgar acted in the context of the phase of Christianization of Saxony (present-day Northern Germany) begun by Charlemagne and continued by Charlemagne's son and successor, Louis the Pious. In 822 Ansgar became one of many missionaries sent to found the abbey of Corvey (New Corbie) in Westphalia, where he became a teacher and preacher. A group of monks including Ansgar were sent further north to Jutland with King Harald Klak, who had received baptism during his exile. With Harald's downfall in 827 and Ansgar's companion Autbert having died, their school for the sons of courtiers closed and Ansgar returned to Germany. Then in 829, after the Swedish king Björn at Hauge requested missionaries for his Swedes, King Louis sent Ansgar, now accompanied by friar Witmar from New Corbie as his assistant. Ansgar preached and made converts, particularly during six months at Birka, on Lake Mälaren, where the wealthy widow Mor Frideborg extended hospitality. Ansgar organized a small congregation with her and the king's steward, Hergeir, as its most prominent members. In 831 Ansgar returned to Louis' court at Worms and was appointed to the Archbishopric of Hamburg-Bremen. This was a new archbishopric, incorporating the bishoprics of Bremen and Verden and with the right to send missions into all the northern lands, as well as to consecrate bishops for them. Ansgar received the mission of evangelizing pagan Denmark, Norway and Sweden. The King of Sweden decided to cast lots as to whether to admit the Christian missionaries into his kingdom. Ansgar recommended the issue to the care of God, and the lot was favorable. Ansgar was consecrated as a bishop in November 831, with the approval of Gregory IV. Before traveling north once again, Ansgar traveled to Rome to receive the pallium directly from the pope's hands, and was formally named legate for the northern lands. Ebbo, Archbishop of Reims, had previously received a similar commission, but would be deposed twice before his death in 851, and never actually traveled so far north, so the jurisdiction was divided by agreement, with Ebbo retaining Sweden for himself. For a time Ansgar devoted himself to the needs of his own diocese, which was still a missionary territory and had few churches. He founded a monastery and a school in Hamburg. Although intended to serve the Danish mission further north, it accomplished little. After Louis the Pious died in 840, his empire was divided and Ansgar lost the abbey of Turholt, which Louis had given to endow Ansgar's work. 
Then in 845, the Danes unexpectedly raided Hamburg, destroying all the church's treasures and books. Ansgar now had neither see nor revenue, and many helpers deserted him. The new king, Louis' third son, Louis the German, did not re-endow Turholt to Ansgar, but in 847 he named the missionary to the vacant diocese of Bremen, where Ansgar moved in 848. However, since Bremen had been suffragan to the Bishop of Cologne, combining the sees of Bremen and Hamburg presented canonical difficulties. After prolonged negotiations, Pope Nicholas I would approve the union of the two dioceses in 864. Through this political turmoil, Ansgar continued his northern mission. The Danish civil war compelled him to establish good relations with two kings, Horik the Elder and his son, Horik II. Both assisted him until his death; Ansgar was able to secure permission to build a church in Sleswick north of Hamburg and recognition of Christianity as a tolerated religion. Ansgar did not forget the Swedish mission, and spent two years there in person (848–850), averting a threatened pagan reaction. In 854, Ansgar returned to Sweden when king Olof ruled in Birka. According to Rimbert, he was well disposed to Christianity. On a Viking raid to Apuole (current village in Lithuania) in Courland, the Swedes plundered the Curonians. Death and legacy. Ansgar was buried in Bremen in 865. His successor as archbishop, Rimbert, wrote the "Vita Ansgarii". He noted that Ansgar wore a rough hair shirt, lived on bread and water, and showed great charity to the poor. Adam of Bremen attributed the "Vita et miracula of Willehad" (first bishop of Bremen) to Ansgar in "Gesta Hammenburgensis ecclesiæ"; Ansgar is also the reputed author of a collection of brief prayers "Pigmenta" (ed. J. M. Lappenberg, Hamburg, 1844). Pope Nicholas I declared Ansgar a saint shortly after the missionary's death. The first actual missionary in Sweden and the Nordic countries (and organizer of the Catholic church therein), Ansgar was later declared "Patron of Scandinavia". Relics are located in Hamburg in two places: St. Mary's Cathedral (Ger.: Domkirche St. Marien) and St. Ansgar's and St. Bernard's Church (Ger.: St. Ansgar und St. Bernhard Kirche). Statues of Bishop Ansgar stand in Hamburg, Copenhagen and Ribe, as well as a stone cross at Birka. His feast day (Lesser Festival) is 3 February, as it is in the Church of England and the Episcopal Church. Visions. Although a historical document and primary source written by a man whose existence can be proven historically, the "Vita Ansgarii" ("The Life of Ansgar") aims above all to demonstrate Ansgar's sanctity. It is partly concerned with Ansgar's visions, which, according to the author Rimbert, encouraged and assisted Ansgar's remarkable missionary feats. Through the course of this work, Ansgar repeatedly embarks on a new stage in his career following a vision. According to Rimbert, his early studies and ensuing devotion to the ascetic life of a monk were inspired by a vision of his mother in the presence of Mary, mother of Jesus. Again, when the Swedish people were left without a priest for some time, he begged King Horik to help him with this problem; then after receiving his consent, consulted with Bishop Gautbert to find a suitable man. The two together sought the approval of King Louis, which he granted when he learned that they were in agreement on the issue. 
Ansgar was convinced he was commanded by heaven to undertake this mission and was influenced by a vision he received when he was concerned about the journey, in which he met a man who reassured him of his purpose and informed him of a prophet that he would meet, the abbot Adalhard, who would instruct him in what was to happen. In the vision, he searched for and found Adalhard, who commanded, "Islands, listen to me, pay attention, remotest peoples", which Ansgar interpreted as God's will that he go to the Scandinavian countries as "most of that country consisted of islands, and also, when 'I will make you the light of the nations so that my salvation may reach to the ends of the earth' was added, since the end of the world in the north was in Swedish territory".
2546
Automated theorem proving
Automated theorem proving (also known as ATP or automated deduction) is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. Automated reasoning over mathematical proof was a major impetus for the development of computer science. Logical foundations. While the roots of formalised logic go back to Aristotle, the end of the 19th and early 20th centuries saw the development of modern logic and formalised mathematics. Frege's "Begriffsschrift" (1879) introduced both a complete propositional calculus and what is essentially modern predicate logic. His "Foundations of Arithmetic", published in 1884, expressed (parts of) mathematics in formal logic. This approach was continued by Russell and Whitehead in their influential "Principia Mathematica", first published 1910–1913, and with a revised second edition in 1927. Russell and Whitehead thought they could derive all mathematical truth using axioms and inference rules of formal logic, in principle opening up the process to automatisation. In 1920, Thoralf Skolem simplified a previous result by Leopold Löwenheim, leading to the Löwenheim–Skolem theorem and, in 1930, to the notion of a Herbrand universe and a Herbrand interpretation that allowed (un)satisfiability of first-order formulas (and hence the validity of a theorem) to be reduced to (potentially infinitely many) propositional satisfiability problems. In 1929, Mojżesz Presburger showed that the theory of natural numbers with addition and equality (now called Presburger arithmetic in his honor) is decidable and gave an algorithm that could determine if a given sentence in the language was true or false. However, shortly after this positive result, Kurt Gödel published "On Formally Undecidable Propositions of Principia Mathematica and Related Systems" (1931), showing that in any sufficiently strong axiomatic system there are true statements which cannot be proved in the system. This topic was further developed in the 1930s by Alonzo Church and Alan Turing, who on the one hand gave two independent but equivalent definitions of computability, and on the other gave concrete examples for undecidable questions. First implementations. Shortly after World War II, the first general purpose computers became available. In 1954, Martin Davis programmed Presburger's algorithm for a JOHNNIAC vacuum tube computer at the Institute for Advanced Study in Princeton, New Jersey. According to Davis, "Its great triumph was to prove that the sum of two even numbers is even". More ambitious was the Logic Theory Machine in 1956, a deduction system for the propositional logic of the "Principia Mathematica", developed by Allen Newell, Herbert A. Simon and J. C. Shaw. Also running on a JOHNNIAC, the Logic Theory Machine constructed proofs from a small set of propositional axioms and three deduction rules: modus ponens, (propositional) variable substitution, and the replacement of formulas by their definition. The system used heuristic guidance, and managed to prove 38 of the first 52 theorems of the "Principia". The "heuristic" approach of the Logic Theory Machine tried to emulate human mathematicians, and could not guarantee that a proof could be found for every valid theorem even in principle. In contrast, other, more systematic algorithms achieved, at least theoretically, completeness for first-order logic. 
Initial approaches relied on the results of Herbrand and Skolem to convert a first-order formula into successively larger sets of propositional formulae by instantiating variables with terms from the Herbrand universe. The propositional formulas could then be checked for unsatisfiability using a number of methods. Gilmore's program used conversion to disjunctive normal form, a form in which the satisfiability of a formula is obvious. Decidability of the problem. Depending on the underlying logic, the problem of deciding the validity of a formula varies from trivial to impossible. For the frequent case of propositional logic, the problem is decidable but co-NP-complete, and hence only exponential-time algorithms are believed to exist for general proof tasks. For a first order predicate calculus, Gödel's completeness theorem states that the theorems (provable statements) are exactly the logically valid well-formed formulas, so identifying valid formulas is recursively enumerable: given unbounded resources, any valid formula can eventually be proven. However, "invalid" formulas (those that are "not" entailed by a given theory), cannot always be recognized. The above applies to first order theories, such as Peano arithmetic. However, for a specific model that may be described by a first order theory, some statements may be true but undecidable in the theory used to describe the model. For example, by Gödel's incompleteness theorem, we know that any theory whose proper axioms are true for the natural numbers cannot prove all first order statements true for the natural numbers, even if the list of proper axioms is allowed to be infinite enumerable. It follows that an automated theorem prover will fail to terminate while searching for a proof precisely when the statement being investigated is undecidable in the theory being used, even if it is true in the model of interest. Despite this theoretical limit, in practice, theorem provers can solve many hard problems, even in models that are not fully described by any first order theory (such as the integers). Related problems. A simpler, but related, problem is "proof verification", where an existing proof for a theorem is certified valid. For this, it is generally required that each individual proof step can be verified by a primitive recursive function or program, and hence the problem is always decidable. Since the proofs generated by automated theorem provers are typically very large, the problem of proof compression is crucial and various techniques aiming at making the prover's output smaller, and consequently more easily understandable and checkable, have been developed. Proof assistants require a human user to give hints to the system. Depending on the degree of automation, the prover can essentially be reduced to a proof checker, with the user providing the proof in a formal way, or significant proof tasks can be performed automatically. Interactive provers are used for a variety of tasks, but even fully automatic systems have proved a number of interesting and hard theorems, including at least one that has eluded human mathematicians for a long time, namely the Robbins conjecture. However, these successes are sporadic, and work on hard problems usually requires a proficient user. Another distinction is sometimes drawn between theorem proving and other techniques, where a process is considered to be theorem proving if it consists of a traditional proof, starting with axioms and producing new inference steps using rules of inference. 
Other techniques would include model checking, which, in the simplest case, involves brute-force enumeration of many possible states (although the actual implementation of model checkers requires much cleverness, and does not simply reduce to brute force). There are hybrid theorem proving systems which use model checking as an inference rule. There are also programs which were written to prove a particular theorem, with a (usually informal) proof that if the program finishes with a certain result, then the theorem is true. A good example of this was the machine-aided proof of the four color theorem, which was very controversial as the first claimed mathematical proof which was essentially impossible to verify by humans due to the enormous size of the program's calculation (such proofs are called non-surveyable proofs). Another example of a program-assisted proof is the one that shows that the game of Connect Four can always be won by the first player. Industrial uses. Commercial use of automated theorem proving is mostly concentrated in integrated circuit design and verification. Since the Pentium FDIV bug, the complicated floating point units of modern microprocessors have been designed with extra scrutiny. AMD, Intel and others use automated theorem proving to verify that division and other operations are correctly implemented in their processors. First-order theorem proving. In the late 1960s agencies funding research in automated deduction began to emphasize the need for practical applications. One of the first fruitful areas was that of program verification whereby first-order theorem provers were applied to the problem of verifying the correctness of computer programs in languages such as Pascal, Ada, etc. Notable among early program verification systems was the Stanford Pascal Verifier developed by David Luckham at Stanford University. This was based on the Stanford Resolution Prover also developed at Stanford using John Alan Robinson's resolution principle. This was the first automated deduction system to demonstrate an ability to solve mathematical problems that were announced in the Notices of the American Mathematical Society before solutions were formally published. First-order theorem proving is one of the most mature subfields of automated theorem proving. The logic is expressive enough to allow the specification of arbitrary problems, often in a reasonably natural and intuitive way. On the other hand, it is still semi-decidable, and a number of sound and complete calculi have been developed, enabling "fully" automated systems. More expressive logics, such as Higher-order logics, allow the convenient expression of a wider range of problems than first order logic, but theorem proving for these logics is less well developed. Benchmarks, competitions, and sources. The quality of implemented systems has benefited from the existence of a large library of standard benchmark examples — the Thousands of Problems for Theorem Provers (TPTP) Problem Library — as well as from the CADE ATP System Competition (CASC), a yearly competition of first-order systems for many important classes of first-order problems. Some important systems (all have won at least one CASC competition division) are listed below. The Theorem Prover Museum is an initiative to conserve the sources of theorem prover systems for future analysis, since they are important cultural/scientific artefacts. It has the sources of many of the systems mentioned above.
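Robinson's resolution principle, mentioned above in connection with the Stanford Resolution Prover, can be sketched at the propositional level. The example below is an illustrative simplification (first-order resolution additionally requires unification, which is not shown) that refutes an unsatisfiable clause set by deriving the empty clause:

    # Sketch of propositional resolution refutation. Clauses are frozensets of
    # literals; "~p" denotes the negation of p.

    def negate(lit):
        return lit[1:] if lit.startswith("~") else "~" + lit

    def resolve(c1, c2):
        """Yield all resolvents of two clauses."""
        for lit in c1:
            if negate(lit) in c2:
                yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

    def unsatisfiable(clauses):
        """Saturate the clause set; report True if the empty clause is derived."""
        clauses = set(clauses)
        while True:
            new = set()
            for a in clauses:
                for b in clauses:
                    if a == b:
                        continue
                    for r in resolve(a, b):
                        if not r:          # empty clause: contradiction found
                            return True
                        new.add(r)
            if new <= clauses:             # saturation reached without contradiction
                return False
            clauses |= new

    # To prove q from p and p -> q, refute the negation: {p}, {~p, q}, {~q}.
    print(unsatisfiable([frozenset({"p"}), frozenset({"~p", "q"}), frozenset({"~q"})]))  # True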
2547
Agent Orange
Agent Orange is a chemical herbicide and defoliant, one of the tactical use Rainbow Herbicides. It was used by the U.S. military as part of its herbicidal warfare program, Operation Ranch Hand, during the Vietnam War from 1961 to 1971. It is a mixture of equal parts of two herbicides, 2,4,5-T and 2,4-D. In addition to its damaging environmental effects, traces of dioxin (mainly TCDD, the most toxic of its type) found in the mixture have caused major health problems for many individuals who were exposed, and their offspring. Agent Orange was produced in the United States from the late 1940s and was used in industrial agriculture, and was also sprayed along railroads and power lines to control undergrowth in forests. During the Vietnam War, the U.S. military procured over 20 million gallons (75 million liters), consisting of a fifty-fifty mixture of 2,4-D and dioxin-contaminated 2,4,5-T. Nine chemical companies produced it: Dow Chemical Company, Monsanto Company, Diamond Shamrock Corporation, Hercules Inc., Thompson Hayward Chemical Co., United States Rubber Company (Uniroyal), Thompson Chemical Co., Hoffman-Taff Chemicals, Inc., and Agriselect. The government of Vietnam says that up to four million people in Vietnam were exposed to the defoliant, and as many as three million people have suffered illness because of Agent Orange, while the Vietnamese Red Cross estimates that up to one million people were disabled or have health problems as a result of exposure to Agent Orange. The United States government has described these figures as unreliable, while documenting cases of leukemia, Hodgkin's lymphoma, and various kinds of cancer in exposed U.S. military veterans. An epidemiological study done by the Centers for Disease Control and Prevention showed that there was an increase in the rate of birth defects of the children of military personnel as a result of Agent Orange. Agent Orange has also caused enormous environmental damage in Vietnam. Over 3,100,000 hectares (31,000 km2 or 11,969 mi2) of forest were defoliated. Defoliants eroded tree cover and seedling forest stock, making reforestation difficult in numerous areas. Animal species diversity is sharply reduced in contrast with unsprayed areas. The environmental destruction caused by this defoliation has been described by Swedish Prime Minister Olof Palme, lawyers, historians and other academics as an ecocide. The use of Agent Orange in Vietnam resulted in numerous legal actions. The United Nations ratified United Nations General Assembly Resolution 31/72 and the Environmental Modification Convention. Lawsuits filed on behalf of both U.S. and Vietnamese veterans sought compensation for damages. Agent Orange was first used by the British Armed Forces in Malaya during the Malayan Emergency. It was also used by the U.S. military in Laos and Cambodia during the Vietnam War because forests near the border with Vietnam were used by the Viet Cong. Chemical composition. The active ingredient of Agent Orange was an equal mixture of two phenoxy herbicides – 2,4-dichlorophenoxyacetic acid (2,4-D) and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T) – in iso-octyl ester form, which contained traces of the dioxin 2,3,7,8-tetrachlorodibenzo-"p"-dioxin (TCDD). TCDD was a trace (typically 2-3 ppm, ranging from 50 ppb to 50 ppm) - but significant - contaminant of Agent Orange. Toxicology. TCDD is the most toxic of the dioxins and is classified as a human carcinogen by the U.S. Environmental Protection Agency (EPA). 
The fat-soluble nature of TCDD causes it to enter the body readily through physical contact or ingestion. Dioxins accumulate easily in the food chain. Dioxin enters the body by attaching to a protein called the aryl hydrocarbon receptor (AhR), a transcription factor. When TCDD binds to AhR, the protein moves to the nucleus, where it influences gene expression. According to U.S. government reports, if not bound chemically to a biological surface such as soil, leaves or grass, Agent Orange dries quickly after spraying and breaks down within hours to days when exposed to sunlight and is no longer harmful. Development. Several herbicides were developed as part of efforts by the United States and the United Kingdom to create herbicidal weapons for use during World War II. These included 2,4-D, 2,4,5-T, MCPA (2-methyl-4-chlorophenoxyacetic acid, 1414B and 1414A, recoded LN-8 and LN-32), and isopropyl phenylcarbamate (1313, recoded LN-33). In 1943, the United States Department of the Army contracted botanist (and later bioethicist) Arthur Galston, who discovered the defoliants later used in Agent Orange, and his employer University of Illinois Urbana-Champaign to study the effects of 2,4-D and 2,4,5-T on cereal grains (including rice) and broadleaf crops. While a graduate and post-graduate student at the University of Illinois, Galston's research and dissertation focused on finding a chemical means to make soybeans flower and fruit earlier. He discovered both that 2,3,5-triiodobenzoic acid (TIBA) would speed up the flowering of soybeans and that in higher concentrations it would defoliate the soybeans. From these studies arose the concept of using aerial applications of herbicides to destroy enemy crops to disrupt their food supply. In early 1945, the U.S. Army ran tests of various 2,4-D and 2,4,5-T mixtures at the Bushnell Army Airfield in Florida. As a result, the U.S. began a full-scale production of 2,4-D and 2,4,5-T and would have used it against Japan in 1946 during Operation Downfall if the war had continued. In the years after the war, the U.S. tested 1,100 compounds, and field trials of the more promising ones were done at British stations in India and Australia, in order to establish their effects in tropical conditions, as well as at the U.S. testing ground in Florida. Between 1950 and 1952, trials were conducted in Tanganyika, at Kikore and Stunyansa, to test arboricides and defoliants under tropical conditions. The chemicals involved were 2,4-D, 2,4,5-T, and endothall (3,6-endoxohexahydrophthalic acid). During 1952–53, the unit supervised the aerial spraying of 2,4,5-T in Kenya to assess the value of defoliants in the eradication of tsetse fly. Early use. In Malaya the local unit of Imperial Chemical Industries researched defoliants as weed killers for rubber plantations. Roadside ambushes by the Malayan National Liberation Army were a danger to the British military during the Malayan Emergency (1948–1960) so trials were made to defoliate vegetation that might hide ambush sites, but hand removal was found cheaper. A detailed account of how the British experimented with the spraying of herbicides was written by two scientists, E.K. Woodford of Agricultural Research Council's Unit of Experimental Agronomy and H.G.H. Kearns of the University of Bristol. After the Malayan Emergency ended in 1960, the U.S. considered the British precedent in deciding that the use of defoliants was a legal tactic of warfare. Secretary of State Dean Rusk advised President John F. 
Kennedy that the British had established a precedent for warfare with herbicides in Malaya. Use in the Vietnam War. In mid-1961, President Ngo Dinh Diem of South Vietnam asked the United States to help defoliate the lush jungle that was providing cover to his Communist enemies. In August of that year, the Republic of Vietnam Air Force conducted herbicide operations with American help. Diem's request launched a policy debate in the White House and the State and Defense Departments. Many U.S. officials supported herbicide operations, pointing out that the British had already used herbicides and defoliants in Malaya during the 1950s. In November 1961, Kennedy authorized the start of Operation Ranch Hand, the codename for the United States Air Force's herbicide program in Vietnam. The herbicide operations were formally directed by the government of South Vietnam. During the Vietnam War, between 1962 and 1971, the United States military sprayed nearly of various chemicals – the "rainbow herbicides" and defoliants – in Vietnam, eastern Laos, and parts of Cambodia as part of Operation Ranch Hand, reaching its peak from 1967 to 1969. For comparison purposes, an olympic size pool holds approximately . As the British did in Malaya, the goal of the U.S. was to defoliate rural/forested land, depriving guerrillas of food and concealment and clearing sensitive areas such as around base perimeters and possible ambush sites along roads and canals. Samuel P. Huntington argued that the program was also a part of a policy of forced draft urbanization, which aimed to destroy the ability of peasants to support themselves in the countryside, forcing them to flee to the U.S.-dominated cities, depriving the guerrillas of their rural support base. Agent Orange was usually sprayed from helicopters or from low-flying C-123 Provider aircraft, fitted with sprayers and "MC-1 Hourglass" pump systems and chemical tanks. Spray runs were also conducted from trucks, boats, and backpack sprayers. Altogether, over 80 million litres of Agent Orange were applied. The first batch of herbicides was unloaded at Tan Son Nhut Air Base in South Vietnam, on January 9, 1962. U.S. Air Force records show at least 6,542 spraying missions took place over the course of Operation Ranch Hand. By 1971, 12 percent of the total area of South Vietnam had been sprayed with defoliating chemicals, at an average concentration of 13 times the recommended U.S. Department of Agriculture application rate for domestic use. In South Vietnam alone, an estimated of agricultural land was ultimately destroyed. In some areas, TCDD concentrations in soil and water were hundreds of times greater than the levels considered safe by the EPA. The campaign destroyed of upland and mangrove forests and thousands of square kilometres of crops. Overall, more than 20% of South Vietnam's forests were sprayed at least once over the nine-year period. 3.2% of South Vietnam's cultivated land was sprayed at least once between 1965 and 1971. 90% of herbicide use was directed at defoliation. The U.S. military began targeting food crops in October 1962, primarily using Agent Blue; the American public was not made aware of the crop destruction programs until 1965 (and it was then believed that crop spraying had begun that spring). In 1965, 42% of all herbicide spraying was dedicated to food crops. In 1965, members of the U.S. Congress were told, "crop destruction is understood to be the more important purpose ... 
but the emphasis is usually given to the jungle defoliation in public mention of the program." The first official acknowledgment of the programs came from the State Department in March 1966. When crops were destroyed, the Viet Cong would compensate for the loss of food by confiscating more food from local villages. Some military personnel reported being told they were destroying crops used to feed guerrillas, only to discover later that most of the destroyed food was actually produced to support the local civilian population. For example, according to Wil Verwey, 85% of the crop lands in Quang Ngai province were scheduled to be destroyed in 1970 alone. He estimated this would have caused famine and left hundreds of thousands of people without food or malnourished in the province. According to a report by the American Association for the Advancement of Science, the herbicide campaign had disrupted the food supply of more than 600,000 people by 1970. Many experts at the time, including Arthur Galston, opposed herbicidal warfare because of concerns about the side effects on humans and the environment of indiscriminately spraying the chemical over a wide area. As early as 1966, resolutions were introduced to the United Nations charging that the U.S. was violating the 1925 Geneva Protocol, which regulated the use of chemical and biological weapons. The U.S. defeated most of the resolutions, arguing that Agent Orange was not a chemical or a biological weapon, as it was considered a herbicide and a defoliant and was used in an effort to destroy plant crops and deprive the enemy of concealment, not to target human beings. The U.S. delegation argued that a weapon, by definition, is any device used to injure, defeat, or destroy living beings, structures, or systems, and that Agent Orange did not qualify under that definition. It also argued that if the U.S. were to be charged with using Agent Orange, then the United Kingdom and its Commonwealth nations should be charged as well, since they had also used it widely during the Malayan Emergency in the 1950s. In 1969, the United Kingdom commented on the draft Resolution 2603 (XXIV): "The evidence seems to us to be notably inadequate for the assertion that the use in war of chemical substances specifically toxic to plants is prohibited by international law." A study carried out by the Bionetic Research Laboratories between 1965 and 1968 found malformations in test animals caused by 2,4,5-T, a component of Agent Orange. The study was later brought to the attention of the White House in October 1969. Other studies reported similar results, and the Department of Defense began to reduce the herbicide operation. On April 15, 1970, it was announced that the use of Agent Orange was suspended. In the summer of 1970, two brigades of the Americal Division continued to use Agent Orange for crop destruction in violation of the suspension. An investigation led to disciplinary action against the brigade and division commanders because they had falsified reports to hide its use. Defoliation and crop destruction were completely stopped by June 30, 1971. Health effects. There are various types of cancer associated with Agent Orange, including chronic B-cell leukemia, Hodgkin's lymphoma, multiple myeloma, non-Hodgkin's lymphoma, prostate cancer, respiratory cancer, lung cancer, and soft tissue sarcomas. 
Vietnamese people. The government of Vietnam states that 4 million of its citizens were exposed to Agent Orange, and as many as 3 million have suffered illnesses because of it; these figures include their children who were exposed. The Red Cross of Vietnam estimates that up to 1 million people are disabled or have health problems due to Agent Orange contamination. The United States government has challenged these figures as being unreliable. According to a study by Dr. Nguyen Viet Nhan, children in the areas where Agent Orange was used have been affected and have multiple health problems, including cleft palate, mental disabilities, hernias, and extra fingers and toes. In the 1970s, high levels of dioxin were found in the breast milk of South Vietnamese women, and in the blood of U.S. military personnel who had served in Vietnam. The most affected zones are the mountainous area along Truong Son (Long Mountains) and the border between Vietnam and Cambodia. The affected residents are living in substandard conditions with many genetic diseases. In 2006, Anh Duc Ngo and colleagues of the University of Texas Health Science Center published a meta-analysis that exposed a large amount of heterogeneity (different findings) between studies, a finding consistent with a lack of consensus on the issue. Despite this, statistical analysis of the studies they examined resulted in data that the increase in birth defects/relative risk (RR) from exposure to agent orange/dioxin "appears" to be on the order of 3 in Vietnamese-funded studies, but 1.29 in the rest of the world. There is data near the threshold of statistical significance suggesting Agent Orange contributes to still-births, cleft palate, and neural tube defects, with spina bifida being the most statistically significant defect. The large discrepancy in RR between Vietnamese studies and those in the rest of the world has been ascribed to bias in the Vietnamese studies. Twenty-eight of the former U.S. military bases in Vietnam where the herbicides were stored and loaded onto airplanes may still have high levels of dioxins in the soil, posing a health threat to the surrounding communities. Extensive testing for dioxin contamination has been conducted at the former U.S. airbases in Da Nang, Phù Cát District and Biên Hòa. Some of the soil and sediment on the bases have extremely high levels of dioxin requiring remediation. The Da Nang Air Base has dioxin contamination up to 350 times higher than international recommendations for action. The contaminated soil and sediment continue to affect the citizens of Vietnam, poisoning their food chain and causing illnesses, serious skin diseases and a variety of cancers in the lungs, larynx, and prostate. U.S. veterans. While in Vietnam, US-allied soldiers were told not to worry about agent orange and were persuaded the chemical was harmless. After returning home, Vietnam veterans began to suspect their ill health or the instances of their wives having miscarriages or children born with birth defects might be related to Agent Orange and the other toxic herbicides to which they had been exposed in Vietnam. Veterans began to file claims in 1977 to the Department of Veterans Affairs for disability payments for health care for conditions they believed were associated with exposure to Agent Orange, or more specifically, dioxin, but their claims were denied unless they could prove the condition began when they were in the service or within one year of their discharge. 
In order to qualify for compensation, veterans must have served on or near the perimeters of military bases in Thailand during the Vietnam era, where herbicides were tested and stored outside of Vietnam; have been crew members on C-123 planes flown after the Vietnam War; or have been associated with Department of Defense (DoD) projects to test, dispose of, or store herbicides in the U.S. By April 1993, the Department of Veterans Affairs had compensated only 486 victims, although it had received disability claims from 39,419 soldiers who had been exposed to Agent Orange while serving in Vietnam. In a November 2004 Zogby International poll of 987 people, 79% of respondents thought the U.S. chemical companies which produced Agent Orange defoliant should compensate U.S. soldiers who were affected by the toxic chemical used during the war in Vietnam, and 51% said they supported compensation for Vietnamese Agent Orange victims. National Academy of Medicine. Starting in the early 1990s, the federal government directed the Institute of Medicine (IOM), now known as the National Academy of Medicine, to issue reports every two years on the health effects of Agent Orange and similar herbicides. First published in 1994 and titled Veterans and Agent Orange, the IOM reports assess the risk of both cancer and non-cancer health effects. Each health effect is categorized by evidence of association based on available research data. The last update was published in 2016, entitled "Veterans and Agent Orange: Update 2014." The report shows sufficient evidence of an association with soft tissue sarcoma, non-Hodgkin lymphoma (NHL), Hodgkin disease, and chronic lymphocytic leukemia (CLL), including hairy cell leukemia and other chronic B-cell leukemias. Limited or suggestive evidence of an association was found for respiratory cancers (lung, bronchus, trachea, larynx), prostate cancer, multiple myeloma, and bladder cancer. Numerous other cancers were determined to have inadequate or insufficient evidence of links to Agent Orange. The National Academy of Medicine has repeatedly concluded that any evidence suggestive of an association between Agent Orange and prostate cancer is "limited because chance, bias, and confounding could not be ruled out with confidence." At the request of the Veterans Administration, the Institute of Medicine evaluated whether service in these C-123 aircraft could have plausibly exposed soldiers to residual herbicides and been detrimental to their health. Its report, "Post-Vietnam Dioxin Exposure in Agent Orange-Contaminated C-123 Aircraft", confirmed that it could have. U.S. Public Health Service. Publications by the United States Public Health Service have shown that Vietnam veterans, overall, have increased rates of cancer, and nerve, digestive, skin, and respiratory disorders. The Centers for Disease Control and Prevention notes that in particular, there are higher rates of acute/chronic leukemia, Hodgkin's lymphoma and non-Hodgkin's lymphoma, throat cancer, prostate cancer, lung cancer, colon cancer, Ischemic heart disease, soft tissue sarcoma, and liver cancer. With the exception of liver cancer, these are the same conditions the U.S. Veterans Administration has determined may be associated with exposure to Agent Orange/dioxin and are on the list of conditions eligible for compensation and treatment. Military personnel who were involved in storage, mixture and transportation (including aircraft mechanics), and actual use of the chemicals were probably among those who received the heaviest exposures. 
Military members who served on Okinawa also claim to have been exposed to the chemical, but there is no verifiable evidence to corroborate these claims. Some studies have suggested that veterans exposed to Agent Orange may be more at risk of developing prostate cancer and potentially more than twice as likely to develop higher-grade, more lethal prostate cancers. However, a critical analysis of these studies and 35 others consistently found that there was no significant increase in prostate cancer incidence or mortality in those exposed to Agent Orange or 2,3,7,8-tetracholorodibenzo-"p"-dioxin. U.S. Veterans of Laos and Cambodia. During the Vietnam War, the United States fought the North Vietnamese, and their allies, in Laos and Cambodia, including heavy bombing campaigns. They also sprayed large quantities of Agent Orange in each of those countries. According to one estimate, the U.S. dropped 475,500 gallons (1.8 million liters) of Agent Orange in Laos and 40,900 gallons (155,000 L) in Cambodia. Because Laos and Cambodia were both officially neutral during the Vietnam War, the U.S. attempted to keep secret its military operations in those countries, from the American population and has largely avoided compensating American veterans and CIA personnel stationed in Cambodia and Laos who suffered permanent injuries as a result of exposure to Agent Orange there. One noteworthy exception, according to the U.S. Department of Labor, is a claim filed with the CIA by an employee of "a self-insured contractor to the CIA that was no longer in business." The CIA advised the Department of Labor that it "had no objections" to paying the claim and Labor accepted the claim for payment: Ecological impact. About 17.8%——of the total forested area of Vietnam was sprayed during the war, which disrupted the ecological equilibrium. The persistent nature of dioxins, erosion caused by loss of tree cover, and loss of seedling forest stock meant that reforestation was difficult (or impossible) in many areas. Many defoliated forest areas were quickly invaded by aggressive pioneer species (such as bamboo and cogon grass), making forest regeneration difficult and unlikely. Animal species diversity was also impacted; in one study a Harvard biologist found 24 species of birds and 5 species of mammals in a sprayed forest, while in two adjacent sections of unsprayed forest there were, respectively, 145 and 170 species of birds and 30 and 55 species of mammals. Dioxins from Agent Orange have persisted in the Vietnamese environment since the war, settling in the soil and sediment and entering the food chain through animals and fish which feed in the contaminated areas. The movement of dioxins through the food web has resulted in bioconcentration and biomagnification. The areas most heavily contaminated with dioxins are former U.S. air bases. Sociopolitical impact. American policy during the Vietnam War was to destroy crops, accepting the sociopolitical impact that that would have. The RAND Corporation's "Memorandum 5446-ISA/ARPA" states: "the fact that the VC [the Vietcong] obtain most of their food from the neutral rural population dictates the destruction of civilian crops ... if they are to be hampered by the crop destruction program, it will be necessary to destroy large portions of the rural economy – probably 50% or more". Crops were deliberately sprayed with Agent Orange and areas were bulldozed clear of vegetation forcing many rural civilians to cities. Legal and diplomatic proceedings. International. 
The extensive environmental damage that resulted from usage of the herbicide prompted the United Nations to pass Resolution 31/72 and ratify the Environmental Modification Convention. Many states do not regard this as a complete ban on the use of herbicides and defoliants in warfare, but it does require case-by-case consideration. Article 2(4) of Protocol III of the Convention on Certain Conventional Weapons contains the "Jungle Exception", which prohibits states from attacking forests or jungles "except if such natural elements are used to cover, conceal or camouflage combatants or military objectives or are military objectives themselves". This exception voids any protection of any military and civilian personnel from a napalm attack or something like Agent Orange, and it has been argued that it was clearly designed to cover situations like U.S. tactics in Vietnam. Class action lawsuit. Since at least 1978, several lawsuits have been filed against the companies which produced Agent Orange, among them Dow Chemical, Monsanto, and Diamond Shamrock. Attorney Hy Mayerson was an early pioneer in Agent Orange litigation, working with environmental attorney Victor Yannacone in 1980 on the first class-action suits against wartime manufacturers of Agent Orange. In meeting Dr. Ronald A. Codario, one of the first civilian doctors to see affected patients, Mayerson, so impressed by the fact a physician would show so much interest in a Vietnam veteran, forwarded more than a thousand pages of information on Agent Orange and the effects of dioxin on animals and humans to Codario's office the day after he was first contacted by the doctor. The corporate defendants sought to escape culpability by blaming everything on the U.S. government. In 1980, Mayerson, with Sgt. Charles E. Hartz as their principal client, filed the first U.S. Agent Orange class-action lawsuit in Pennsylvania, for the injuries military personnel in Vietnam suffered through exposure to toxic dioxins in the defoliant. Attorney Mayerson co-wrote the brief that certified the Agent Orange Product Liability action as a class action, the largest ever filed as of its filing. Hartz's deposition was one of the first ever taken in America, and the first for an Agent Orange trial, for the purpose of preserving testimony at trial, as it was understood that Hartz would not live to see the trial because of a brain tumor that began to develop while he was a member of Tiger Force, special forces, and LRRPs in Vietnam. The firm also located and supplied critical research to the veterans' lead expert, Dr. Codario, including about 100 articles from toxicology journals dating back more than a decade, as well as data about where herbicides had been sprayed, what the effects of dioxin had been on animals and humans, and every accident in factories where herbicides were produced or dioxin was a contaminant of some chemical reaction. The chemical companies involved denied that there was a link between Agent Orange and the veterans' medical problems. However, on May 7, 1984, seven chemical companies settled the class-action suit out of court just hours before jury selection was to begin. The companies agreed to pay $180 million as compensation if the veterans dropped all claims against them. Slightly over 45% of the sum was ordered to be paid by Monsanto alone. Many veterans who were victims of Agent Orange exposure were outraged the case had been settled instead of going to court and felt they had been betrayed by the lawyers. 
"Fairness Hearings" were held in five major American cities, where veterans and their families discussed their reactions to the settlement and condemned the actions of the lawyers and courts, demanding the case be heard before a jury of their peers. Federal Judge Jack B. Weinstein refused the appeals, claiming the settlement was "fair and just". By 1989, the veterans' fears were confirmed when it was decided how the money from the settlement would be paid out. A totally disabled Vietnam veteran would receive a maximum of $12,000 spread out over the course of 10 years. Furthermore, by accepting the settlement payments, disabled veterans would become ineligible for many state benefits that provided far more monetary support than the settlement, such as food stamps, public assistance, and government pensions. A widow of a Vietnam veteran who died of Agent Orange exposure would receive $3,700. In 2004, Monsanto spokesman Jill Montgomery said Monsanto should not be liable at all for injuries or deaths caused by Agent Orange, saying: "We are sympathetic with people who believe they have been injured and understand their concern to find the cause, but reliable scientific evidence indicates that Agent Orange is not the cause of serious long-term health effects." New Jersey Agent Orange Commission. In 1980, New Jersey created the New Jersey Agent Orange Commission, the first state commission created to study its effects. The commission's research project in association with Rutgers University was called "The Pointman Project". It was disbanded by Governor Christine Todd Whitman in 1996. During the first phase of the project, commission researchers devised ways to determine trace dioxin levels in blood. Prior to this, such levels could only be found in the adipose (fat) tissue. The project studied dioxin (TCDD) levels in blood as well as in adipose tissue in a small group of Vietnam veterans who had been exposed to Agent Orange and compared them to those of a matched control group; the levels were found to be higher in the exposed group. The second phase of the project continued to examine and compare dioxin levels in various groups of Vietnam veterans, including Army, Marines and brown water riverboat Navy personnel. U.S. Congress. In 1991, Congress enacted the Agent Orange Act, giving the Department of Veterans Affairs the authority to declare certain conditions "presumptive" to exposure to Agent Orange/dioxin, making these veterans who served in Vietnam eligible to receive treatment and compensation for these conditions. The same law required the National Academy of Sciences to periodically review the science on dioxin and herbicides used in Vietnam to inform the Secretary of Veterans Affairs about the strength of the scientific evidence showing association between exposure to Agent Orange/dioxin and certain conditions. The authority for the National Academy of Sciences reviews and addition of any new diseases to the presumptive list by the VA expired in 2015 under the sunset clause of the Agent Orange Act of 1991. Through this process, the list of 'presumptive' conditions has grown since 1991, and currently the U.S. 
Department of Veterans Affairs has listed prostate cancer, respiratory cancers, multiple myeloma, type II diabetes mellitus, Hodgkin's disease, non-Hodgkin's lymphoma, soft tissue sarcoma, chloracne, porphyria cutanea tarda, peripheral neuropathy, chronic lymphocytic leukemia, and spina bifida in children of veterans exposed to Agent Orange as conditions associated with exposure to the herbicide. This list now includes B cell leukemias, such as hairy cell leukemia, Parkinson's disease and ischemic heart disease, these last three having been added on August 31, 2010. Several highly placed individuals in government are voicing concerns about whether some of the diseases on the list should, in fact, actually have been included. In 2011, an appraisal of the 20-year long "Air Force Health Study" that began in 1982 indicates that the results of the AFHS as they pertain to Agent Orange, do not provide evidence of disease in the Operation Ranch Hand veterans caused by "their elevated levels of exposure to Agent Orange". The VA initially denied the applications of post-Vietnam C-123 aircrew veterans because as veterans without "boots on the ground" service in Vietnam, they were not covered under VA's interpretation of "exposed". In June 2015, the Secretary of Veterans Affairs issued an Interim final rule providing presumptive service connection for post-Vietnam C-123 aircrews, maintenance staff and aeromedical evacuation crews. The VA now provides medical care and disability compensation for the recognized list of Agent Orange illnesses. U.S.–Vietnamese government negotiations. In 2002, Vietnam and the U.S. held a joint conference on Human Health and Environmental Impacts of Agent Orange. Following the conference, the U.S. National Institute of Environmental Health Sciences (NIEHS) began scientific exchanges between the U.S. and Vietnam, and began discussions for a joint research project on the human health impacts of Agent Orange. These negotiations broke down in 2005, when neither side could agree on the research protocol and the research project was canceled. More progress has been made on the environmental front. In 2005, the first U.S.-Vietnam workshop on remediation of dioxin was held. Starting in 2005, the EPA began to work with the Vietnamese government to measure the level of dioxin at the Da Nang Air Base. Also in 2005, the Joint Advisory Committee on Agent Orange, made up of representatives of Vietnamese and U.S. government agencies, was established. The committee has been meeting yearly to explore areas of scientific cooperation, technical assistance and environmental remediation of dioxin. A breakthrough in the diplomatic stalemate on this issue occurred as a result of United States President George W. Bush's state visit to Vietnam in November 2006. In the joint statement, President Bush and President Triet agreed "further joint efforts to address the environmental contamination near former dioxin storage sites would make a valuable contribution to the continued development of their bilateral relationship." On May 25, 2007, President Bush signed the U.S. Troop Readiness, Veterans' Care, Katrina Recovery, and Iraq Accountability Appropriations Act, 2007 into law for the wars in Iraq and Afghanistan that included an earmark of $3 million specifically for funding for programs for the remediation of dioxin 'hotspots' on former U.S. 
military bases, and for public health programs for the surrounding communities; some authors consider this to be completely inadequate, pointing out that the Da Nang Airbase alone will cost $14 million to clean up, and that three others are estimated to require $60 million for cleanup. The appropriation was renewed in the fiscal year 2009 and again in FY 2010. An additional $12 million was appropriated in the fiscal year 2010 in the Supplemental Appropriations Act and a total of $18.5 million appropriated for fiscal year 2011. Secretary of State Hillary Clinton stated during a visit to Hanoi in October 2010 that the U.S. government would begin work on the clean-up of dioxin contamination at the Da Nang Airbase. In June 2011, a ceremony was held at Da Nang airport to mark the start of U.S.-funded decontamination of dioxin hotspots in Vietnam. Thirty-two million dollars has so far been allocated by the U.S. Congress to fund the program. A $43 million project began in the summer of 2012, as Vietnam and the U.S. forge closer ties to boost trade and counter China's rising influence in the disputed South China Sea. Vietnamese victims class action lawsuit in U.S. courts. On January 31, 2004, a victim's rights group, the Vietnam Association for Victims of Agent Orange/dioxin (VAVA), filed a lawsuit in the United States District Court for the Eastern District of New York in Brooklyn, against several U.S. companies for liability in causing personal injury, by developing, and producing the chemical, and claimed that the use of Agent Orange violated the 1907 Hague Convention on Land Warfare, 1925 Geneva Protocol, and the 1949 Geneva Conventions. Dow Chemical and Monsanto were the two largest producers of Agent Orange for the U.S. military and were named in the suit, along with the dozens of other companies (Diamond Shamrock, Uniroyal, Thompson Chemicals, Hercules, etc.). On March 10, 2005, Judge Jack B. Weinstein of the Eastern District – who had presided over the 1984 U.S. veterans class-action lawsuit – dismissed the lawsuit, ruling there was no legal basis for the plaintiffs' claims. He concluded Agent Orange was not considered a poison under international law at the time of its use by the U.S.; the U.S. was not prohibited from using it as a herbicide; and the companies which produced the substance were not liable for the method of its use by the government. In the dismissal statement issued by Weinstein, he wrote "The prohibition extended only to gases deployed for their asphyxiating or toxic effects on man, not to herbicides designed to affect plants that may have unintended harmful side-effects on people." Author and activist George Jackson had written previously that "if the Americans were guilty of war crimes for using Agent Orange in Vietnam, then the British would be also guilty of war crimes as well since they were the first nation to deploy the use of herbicides and defoliants in warfare and used them on a large scale throughout the Malayan Emergency. Not only was there no outcry by other states in response to the United Kingdom's use, but the U.S. viewed it as establishing a precedent for the use of herbicides and defoliants in jungle warfare." The U.S. government was also not a party in the lawsuit because of sovereign immunity, and the court ruled the chemical companies, as contractors of the U.S. government, shared the same immunity. The case was appealed and heard by the Second Circuit Court of Appeals in Manhattan on June 18, 2007. 
Three judges on the court upheld Weinstein's ruling to dismiss the case. They ruled that, though the herbicides contained a dioxin (a known poison), they were not intended to be used as a poison on humans. Therefore, they were not considered a chemical weapon and thus not a violation of international law. A further review of the case by the entire panel of judges of the Court of Appeals also confirmed this decision. The lawyers for the Vietnamese filed a petition to the U.S. Supreme Court to hear the case. On March 2, 2009, the Supreme Court denied certiorari and declined to reconsider the ruling of the Court of Appeals. Help for those affected in Vietnam. To assist those who have been affected by Agent Orange/dioxin, the Vietnamese have established "peace villages", which each host between 50 and 100 victims, giving them medical and psychological help. As of 2006, there were 11 such villages, thus granting some social protection to fewer than a thousand victims. U.S. veterans of the war in Vietnam and individuals who are aware and sympathetic to the impacts of Agent Orange have supported these programs in Vietnam. An international group of veterans from the U.S. and its allies during the Vietnam War working with their former enemy—veterans from the Vietnam Veterans Association—established the Vietnam Friendship Village outside of Hanoi. The center provides medical care, rehabilitation and vocational training for children and veterans from Vietnam who have been affected by Agent Orange. In 1998, The Vietnam Red Cross established the Vietnam Agent Orange Victims Fund to provide direct assistance to families throughout Vietnam that have been affected. In 2003, the Vietnam Association of Victims of Agent Orange (VAVA) was formed. In addition to filing the lawsuit against the chemical companies, VAVA provides medical care, rehabilitation services and financial assistance to those injured by Agent Orange. The Vietnamese government provides small monthly stipends to more than 200,000 Vietnamese believed affected by the herbicides; this totaled $40.8 million in 2008. The Vietnam Red Cross has raised more than $22 million to assist the ill or disabled, and several U.S. foundations, United Nations agencies, European governments and nongovernmental organizations have given a total of about $23 million for site cleanup, reforestation, health care and other services to those in need. Vuong Mo of the Vietnam News Agency described one of the centers: May is 13, but she knows nothing, is unable to talk fluently, nor walk with ease due to for her bandy legs. Her father is dead and she has four elder brothers, all mentally retarded ... The students are all disabled, retarded and of different ages. Teaching them is a hard job. They are of the 3rd grade but many of them find it hard to do the reading. Only a few of them can. Their pronunciation is distorted due to their twisted lips and their memory is quite short. They easily forget what they've learned ... In the Village, it is quite hard to tell the kids' exact ages. Some in their twenties have a physical statures as small as the 7- or 8-years-old. They find it difficult to feed themselves, much less have mental ability or physical capacity for work. No one can hold back the tears when seeing the heads turning round unconsciously, the bandy arms managing to push the spoon of food into the mouths with awful difficulty ... Yet they still keep smiling, singing in their great innocence, at the presence of some visitors, craving for something beautiful. 
On June 16, 2010, members of the U.S.-Vietnam Dialogue Group on Agent Orange/Dioxin unveiled a comprehensive 10-year Declaration and Plan of Action to address the toxic legacy of Agent Orange and other herbicides in Vietnam. The Plan of Action was released as an Aspen Institute publication and calls upon the U.S. and Vietnamese governments to join with other governments, foundations, businesses, and nonprofits in a partnership to clean up dioxin "hot spots" in Vietnam and to expand humanitarian services for people with disabilities there. On September 16, 2010, Senator Patrick Leahy acknowledged the work of the Dialogue Group by releasing a statement on the floor of the United States Senate. The statement urges the U.S. government to take the Plan of Action's recommendations into account in developing a multi-year plan of activities to address the Agent Orange/dioxin legacy. Use outside of Vietnam. Australia. In 2008, Australian researcher Jean Williams claimed that cancer rates in Innisfail, Queensland, were 10 times higher than the state average because of secret testing of Agent Orange by the Australian military scientists during the Vietnam War. Williams, who had won the Order of Australia medal for her research on the effects of chemicals on U.S. war veterans, based her allegations on Australian government reports found in the Australian War Memorial's archives. A former soldier, Ted Bosworth, backed up the claims, saying that he had been involved in the secret testing. Neither Williams nor Bosworth have produced verifiable evidence to support their claims. The Queensland health department determined that cancer rates in Innisfail were no higher than those in other parts of the state. Canada. The U.S. military, with the permission of the Canadian government, tested herbicides, including Agent Orange, in the forests near Canadian Forces Base Gagetown in New Brunswick. In 2007, the government of Canada offered a one-time ex gratia payment of $20,000 as compensation for Agent Orange exposure at CFB Gagetown. On July 12, 2005, Merchant Law Group, on behalf of over 1,100 Canadian veterans and civilians who were living in and around CFB Gagetown, filed a lawsuit to pursue class action litigation concerning Agent Orange and Agent Purple with the Federal Court of Canada. On August 4, 2009, the case was rejected by the court, citing lack of evidence. In 2007, the Canadian government announced that a research and fact-finding program initiated in 2005 had found the base was safe. On February 17, 2011, the "Toronto Star" revealed that Agent Orange had been employed to clear extensive plots of Crown land in Northern Ontario. The "Toronto Star" reported that, "records from the 1950s, 1960s and 1970s show forestry workers, often students and junior rangers, spent weeks at a time as human markers holding red, helium-filled balloons on fishing lines while low-flying planes sprayed toxic herbicides including an infamous chemical mixture known as Agent Orange on the brush and the boys below." In response to the "Toronto Star" article, the Ontario provincial government launched a probe into the use of Agent Orange. Guam. An analysis of chemicals present in the island's soil, together with resolutions passed by Guam's legislature, suggest that Agent Orange was among the herbicides routinely used on and around Andersen Air Force Base and Naval Air Station Agana. Despite the evidence, the Department of Defense continues to deny that Agent Orange was stored or used on Guam. 
Several Guam veterans have collected evidence to support their disability claims for direct exposure to dioxin-containing herbicides such as 2,4,5-T, seeking the illness associations and disability coverage that have become standard for those who were harmed by the same chemical contaminant in the Agent Orange used in Vietnam. Korea. Agent Orange was used in Korea in the late 1960s. In 1999, about 20,000 South Koreans filed two separate lawsuits against U.S. companies, seeking more than $5 billion in damages. After losing a decision in 2002, they filed an appeal. In January 2006, the South Korean Appeals Court ordered Dow Chemical and Monsanto to pay $62 million in compensation to about 6,800 people. The ruling acknowledged that "the defendants failed to ensure safety as the defoliants manufactured by the defendants had higher levels of dioxins than standard", and, quoting the U.S. National Academy of Sciences report, declared that there was a "causal relationship" between Agent Orange and a range of diseases, including several cancers. The judges failed to acknowledge "the relationship between the chemical and peripheral neuropathy, the disease most widespread among Agent Orange victims". In 2011, the local U.S. television station KPHO-TV in Phoenix, Arizona, alleged that in 1978 the United States Army had buried 250 drums of Agent Orange in Camp Carroll, the U.S. Army base in Gyeongsangbuk-do, Korea. Currently, veterans who provide evidence meeting VA requirements for service in Vietnam, and who can medically establish that at any time after this "presumptive exposure" they developed any of the medical problems on the list of presumptive diseases, may receive compensation from the VA. Certain veterans who served in Korea and are able to prove they were assigned to certain specified units around the DMZ during a specific time frame are afforded a similar presumption. New Zealand. The use of Agent Orange has been controversial in New Zealand because of the exposure of New Zealand troops in Vietnam and because the Ivon Watkins-Dow chemical plant in Paritutu, New Plymouth, produced herbicide used in Agent Orange, which has been alleged at various times to have been exported for use in the Vietnam War and to other users. There have been continuing claims, as yet unproven, that the suburb of Paritutu has also been polluted. However, the agriscience company Corteva (which split from DowDuPont in 2019) agreed to clean up the Paritutu site in September 2022. There are cases of New Zealand soldiers developing cancers such as bone cancer, but none has been scientifically connected to exposure to herbicides. Philippines. Herbicide persistence studies of Agents Orange and White were conducted in the Philippines. Johnston Atoll. The U.S. Air Force operation to remove Herbicide Orange from Vietnam in 1972 was named Operation Pacer IVY, while the operation to destroy the Agent Orange stored at Johnston Atoll in 1977 was named Operation Pacer HO. Operation Pacer IVY collected Agent Orange in South Vietnam and removed it in 1972 aboard the ship MV Transpacific for storage on Johnston Atoll. The EPA reports that of Herbicide Orange was stored at Johnston Island in the Pacific and at Gulfport, Mississippi. Research and studies were initiated to find a safe method to destroy the materials, and it was discovered they could be incinerated safely under special conditions of temperature and dwell time. 
However, these herbicides were expensive, and the Air Force wanted to resell its surplus instead of dumping it at sea. Among the many methods tested was the possibility of salvaging the herbicides by reprocessing them and filtering out the TCDD contaminant with carbonized (charcoaled) coconut fibers. This concept was then tested in 1976 and a pilot plant was constructed at Gulfport. From July to September 1977, during Operation Pacer HO, the entire stock of Agent Orange from both Herbicide Orange storage sites at Gulfport and Johnston Atoll was incinerated in four separate burns in the vicinity of Johnston Island aboard the Dutch-owned waste incineration ship. As of 2004, some records of the storage and disposition of Agent Orange at Johnston Atoll have been associated with the historical records of Operation Red Hat. Okinawa, Japan. There have been dozens of reports in the press about the use and/or storage of military-formulated herbicides on Okinawa, based upon statements by former U.S. service members who had been stationed on the island, photographs, government records, and unearthed storage barrels. The U.S. Department of Defense has denied these allegations with statements by military officials and spokespersons, as well as a January 2013 report authored by Dr. Alvin Young that was released in April 2013. In particular, the 2013 report rebuts articles written by journalist Jon Mitchell as well as a statement from "An Ecological Assessment of Johnston Atoll", a 2003 publication produced by the United States Army Chemical Materials Agency, that states, "in 1972, the U.S. Air Force also brought about 25,000 200L drums of the chemical, Herbicide Orange (HO) to Johnston Island that originated from Vietnam and was stored on Okinawa." The 2013 report states: "The authors of the [2003] report were not DoD employees, nor were they likely familiar with the issues surrounding Herbicide Orange or its actual history of transport to the Island." It also detailed the transport phases and routes of Agent Orange from Vietnam to Johnston Atoll, none of which included Okinawa. Further official confirmation of restricted (dioxin-containing) herbicide storage on Okinawa appeared in a 1971 Fort Detrick report titled "Historical, Logistical, Political and Technical Aspects of the Herbicide/Defoliant Program", which mentions that the environmental statement should consider "Herbicide stockpiles elsewhere in PACOM (Pacific Command) U.S. Government restricted materials Thailand and Okinawa (Kadena AFB)." The 2013 DoD report says that the environmental statement urged by the 1971 report was published in 1974 as "The Department of Air Force Final Environmental Statement", and that the latter did not find Agent Orange was held in either Thailand or Okinawa. Thailand. Agent Orange was tested by the United States in Thailand during the Vietnam War. In 1999, buried drums were uncovered and confirmed to be Agent Orange. Workers who uncovered the drums fell ill while upgrading the airport near Hua Hin District, 100 km south of Bangkok. Vietnam-era veterans whose service involved duty on or near the perimeters of military bases in Thailand anytime between February 28, 1961, and May 7, 1975, may have been exposed to herbicides and may qualify for VA benefits. A declassified Department of Defense report written in 1973 suggests that there was significant use of herbicides on the fenced-in perimeters of military bases in Thailand to remove foliage that provided cover for enemy forces. 
In 2013, the VA determined that herbicides used on the Thailand base perimeters may have been tactical and procured from Vietnam, or a strong, commercial type resembling tactical herbicides. United States. The University of Hawaii has acknowledged extensive testing of Agent Orange on behalf of the United States Department of Defense in Hawaii along with mixtures of Agent Orange on Kaua'i Island in 1967–68 and on Hawaii Island in 1966; testing and storage in other U.S. locations has been documented by the United States Department of Veterans Affairs. In 1971, the C-123 aircraft used for spraying Agent Orange were returned to the United States and assigned various East Coast USAF Reserve squadrons, and then employed in traditional airlift missions between 1972 and 1982. In 1994, testing by the Air Force identified some former spray aircraft as "heavily contaminated" with dioxin residue. Inquiries by aircrew veterans in 2011 brought a decision by the U.S. Department of Veterans Affairs opining that not enough dioxin residue remained to injure these post-Vietnam War veterans. On 26 January 2012, the U.S. Center For Disease Control's Agency for Toxic Substances and Disease Registry challenged this with their finding that former spray aircraft were indeed contaminated and the aircrews exposed to harmful levels of dioxin. In response to veterans' concerns, the VA in February 2014 referred the C-123 issue to the Institute of Medicine for a special study, with results released on January 9, 2015. In 1978, the EPA suspended spraying of Agent Orange in National Forests. Agent Orange was sprayed on thousands of acres of brush in the Tennessee Valley for 15 years before scientists discovered the herbicide was dangerous. Monroe County, Tennessee, is one of the locations known to have been sprayed according to the Tennessee Valley Authority. Forty-four remote acres were sprayed with Agent Orange along power lines throughout the National Forest. In 1983, New Jersey declared a Passaic River production site to be a state of emergency. The dioxin pollution in the Passaic River dates back to the Vietnam era, when Diamond Alkali manufactured it in a factory along the river. The tidal river carried dioxin upstream and down, contaminating a 17-mile stretch of riverbed in one of New Jersey's most populous areas. A December 2006 Department of Defense report listed Agent Orange testing, storage, and disposal sites at 32 locations throughout the United States, as well as in Canada, Thailand, Puerto Rico, Korea, and in the Pacific Ocean. The Veteran Administration has also acknowledged that Agent Orange was used domestically by U.S. forces in test sites throughout the United States. Eglin Air Force Base in Florida was one of the primary testing sites throughout the 1960s. Cleanup programs. In February 2012, Monsanto agreed to settle a case covering dioxin contamination around a plant in Nitro, West Virginia, that had manufactured Agent Orange. Monsanto agreed to pay up to $9 million for cleanup of affected homes, $84 million for medical monitoring of people affected, and the community's legal fees. On 9 August 2012, the United States and Vietnam began a cooperative cleaning up of the toxic chemical on part of Danang International Airport, marking the first time the U.S. government has been involved in cleaning up Agent Orange in Vietnam. Danang was the primary storage site of the chemical. 
Two other cleanup sites the United States and Vietnam are looking at are Biên Hòa, in the southern province of Đồng Nai (a hotspot for dioxin), and Phù Cát airport in the central province of Bình Định, says U.S. Ambassador to Vietnam David Shear. According to the Vietnamese newspaper "Nhân Dân", the U.S. government provided $41 million to the project. As of 2017, some 110,000 cubic meters of soil had been cleaned. The Seabees' Naval Construction Battalion Center at Gulfport, Mississippi, was the largest storage site in the United States for Agent Orange. It covered some 30 acres and was still being cleaned up in 2013. In 2016, the EPA laid out its plan for cleaning up an 8-mile stretch of the Passaic River in New Jersey, with an estimated cost of $1.4 billion. The contaminants reached Newark Bay and other waterways, according to the EPA, which has designated the area a Superfund site. Since destruction of the dioxin requires high temperatures, over 1,000 °C (1,832 °F), the destruction process is energy-intensive.
2551
Astronomical year numbering
Astronomical year numbering is based on AD/CE year numbering, but follows normal decimal integer numbering more strictly. Thus, it has a year 0; the years before that are designated with negative numbers and the years after that are designated with positive numbers. Astronomers use the Julian calendar for years before 1582, including the year 0, and the Gregorian calendar for years after 1582, as exemplified by Jacques Cassini (1740), Simon Newcomb (1898) and Fred Espenak (2007). The prefix AD and the suffixes CE, BC or BCE (Common Era, Before Christ or Before Common Era) are dropped. The year 1 BC/BCE is numbered 0, the year 2 BC is numbered −1, and in general the year "n" BC/BCE is numbered "−("n" − 1)" (a negative number equal to 1 − "n"). The numbers of AD/CE years are not changed and are written with either no sign or a positive sign; thus in general "n" AD/CE is simply "n" or +"n". For ordinary calculation a year zero is often needed, most notably when calculating the number of years in a period that spans the epoch: the end years need only be subtracted from each other. The system is so named due to its use in astronomy. Few other disciplines outside history deal with the time before year 1, some exceptions being dendrochronology, archaeology and geology, the latter two of which use 'years before the present'. Although the absolute numerical values of astronomical and historical years only differ by one before year 1, this difference is critical when calculating astronomical events like eclipses or planetary conjunctions to determine when historical events which mention them occurred. Usage of the year zero. In his Rudolphine Tables (1627), Johannes Kepler used a prototype of year zero which he labeled "Christi" (Christ's) between years labeled "Ante Christum" (Before Christ) and "Post Christum" (After Christ) on the mean motion tables for the Sun, Moon, Saturn, Jupiter, Mars, Venus and Mercury. In 1702, the French astronomer Philippe de la Hire used a year he labeled "Christum 0" at the end of years labeled "ante Christum" (BC), and immediately before years labeled "post Christum" (AD) on the mean motion pages in his "Tabulæ Astronomicæ", thus adding the designation "0" to Kepler's "Christi". Finally, in 1740 the French astronomer Jacques Cassini, who is traditionally credited with the invention of year zero, completed the transition in his "Tables astronomiques", simply labeling this year "0", which he placed at the end of Julian years labeled "avant Jesus-Christ" (before Jesus Christ or BC), and immediately before Julian years labeled "après Jesus-Christ" (after Jesus Christ or AD). Cassini gave the following reasons for using a year 0: Fred Espenak of NASA lists 50 phases of the Moon within year 0, showing that it is a full year, not an instant in time. Jean Meeus gives the following explanation: Signed years without the year zero. Although he used the usual French terms "avant J.-C." (before Jesus Christ) and "après J.-C." (after Jesus Christ) to label years elsewhere in his book, the historian of Byzantium Venance Grumel (1890–1967) used negative years (identified by a minus sign, −) to label BC years and unsigned positive years to label AD years in a table. He may have done so to save space, and he put no year 0 between them. Version 1.0 of the XML Schema language, often used to describe data interchanged between computers in XML, includes built-in primitive datatypes date and dateTime.
Although these are defined in terms of ISO 8601 which uses the proleptic Gregorian calendar and therefore should include a year 0, the XML Schema specification states that there is no year zero. Version 1.1 of the defining recommendation realigned the specification with ISO 8601 by including a year zero, despite the problems arising from the lack of backward compatibility.
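As a rough illustration of the conversion rule described above, the following sketch in Python (an added example, not taken from any cited source; the function names are invented for illustration) converts between historical BC/AD labels and astronomical year numbers, and shows how a span across the epoch reduces to a simple subtraction:

def to_astronomical(year, era):
    # 1 BC -> 0, 2 BC -> -1, and in general n BC/BCE -> 1 - n
    if era.upper() in ("BC", "BCE"):
        return 1 - year
    # AD/CE years keep their value
    return year

def from_astronomical(year):
    # 0 -> "1 BC", -1 -> "2 BC"; positive years stay AD
    if year <= 0:
        return f"{1 - year} BC"
    return f"AD {year}"

# A period spanning the epoch is a plain subtraction of astronomical years:
# from 44 BC to AD 14 is 14 - (1 - 44) = 57 years.
assert to_astronomical(44, "BC") == -43
assert to_astronomical(14, "AD") - to_astronomical(44, "BC") == 57
assert from_astronomical(0) == "1 BC"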
2552
Adam of Bremen
Adam of Bremen (; ; before 1050 – 12 October 1081/1085) was a German medieval chronicler. He lived and worked in the second half of the eleventh century. Adam is most famous for his chronicle "Gesta Hammaburgensis ecclesiae pontificum" ("Deeds of Bishops of the Hamburg Church"). He was "one of the foremost historians and early ethnographers of the medieval period". In his chronicle, he included a chapter mentioning the Norse outpost of Vinland, and was thus the first continental European to write about the New World. Life. Little is known of his life other than hints from his own chronicles. He is believed to have come from Meissen, then its own margravate. The dates of his birth and death are uncertain, but he was probably born before 1050 and died on 12 October of an unknown year (possibly 1081, at the latest 1085). From his chronicles, it is apparent that he was familiar with a number of authors. The honorary name of "Magister Adam" shows that he had passed through all the stages of a higher education. It is probable that he was taught at the "Magdeburger Domschule". In 1066 or 1067, he was invited by Archbishop to join the Church of Bremen. Adam was accepted among the capitulars of Bremen, and by 1069 he appeared as director of the Bremen Cathedral's school. Soon thereafter he began to write the history of Bremen/Hamburg and of the northern lands in his "Gesta". His position and the missionary activity of the church of Bremen allowed him to gather information on the history and the geography of Northern Germany. A stay at the court of Sweyn II of Denmark gave him the opportunity to find information about the history and geography of Denmark and the other Scandinavian countries. Among other things he wrote about in Scandinavia were the sailing passages across Øresund such as today's Helsingør–Helsingborg ferry route.
2553
Ab urbe condita
Ab urbe condita ( 'from the founding of the City'), or anno urbis conditae (; 'in the year since the city's founding'), abbreviated as AUC or AVC, expresses a date in years since 753 BC, the traditional founding of Rome. It is an expression used in antiquity and by classical historians to refer to a given year in Ancient Rome. In reference to the traditional year of the foundation of Rome, the year 1 BC would be written AUC 753, whereas AD 1 would be AUC 754. The foundation of the Roman Empire in 27 BC would be AUC 727. The current year AD  would be AUC . Usage of the term was more common during the Renaissance, when editors sometimes added AUC to Roman manuscripts they published, giving the false impression that the convention was commonly used in antiquity. In reality, the dominant method of identifying years in Roman times was to name the two consuls who held office that year. In late antiquity, regnal years were also in use, as in Roman Egypt during the Diocletian era after AD 293, and in the Byzantine Empire from AD 537, following a decree by Justinian. Significance. The traditional date for the founding of Rome, 21 April 753 BC, is due to Marcus Terentius Varro (1st century BC). Varro may have used the consular list (with its mistakes) and called the year of the first consuls "ab Urbe condita 245", accepting the 244-year interval from Dionysius of Halicarnassus for the kings after the foundation of Rome. The correctness of this calculation has not been confirmed, but it is still used worldwide. From the time of Claudius (fl. AD 41 to AD 54) onward, this calculation superseded other contemporary calculations. Celebrating the anniversary of the city became part of imperial propaganda. Claudius was the first to hold magnificent celebrations in honor of the anniversary of the city, in AD 48, the eight hundredth year from the founding of the city. Hadrian, in AD 121, and Antoninus Pius, in AD 147 and AD 148, held similar celebrations. In AD 248, Philip the Arab celebrated Rome's first millennium, together with Ludi saeculares for Rome's alleged tenth saeculum. Coins from his reign commemorate the celebrations. A coin by a contender for the imperial throne, Pacatianus, explicitly states "[y]ear one thousand and first," which is an indication that the citizens of the empire had a sense of the beginning of a new era, a "Sæculum Novum". Calendar era. The Anno Domini (AD) year numbering was developed by a monk named Dionysius Exiguus in Rome in AD 525, as a result of his work on calculating the date of Easter. Dionysius did not use the AUC convention, but instead based his calculations on the Diocletian era. This convention had been in use since AD 293, the year of the tetrarchy, as it became impractical to use regnal years of the current emperor. In his Easter table, the year AD 532 was equated with the 248th regnal year of Diocletian. The table counted the years starting from the presumed birth of Christ, rather than the accession of the emperor Diocletian on 20 November AD 284 or, as stated by Dionysius: "sed magis elegimus ab incarnatione Domini nostri Jesu Christi annorum tempora praenotare" ("but rather we choose to name the times of the years from the incarnation of our Lord Jesus Christ"). Blackburn and Holford-Strevens review interpretations of Dionysius which place the Incarnation in 2 BC, 1 BC, or AD 1. The year AD 1 corresponds to AUC 754, based on the epoch of Varro.
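As a brief, hedged illustration of the Varronian correspondence described above (added here as an example, not drawn from the cited sources; the function names are invented), the conversion amounts to adding or subtracting a fixed offset:

def auc_from_ad(year_ad):
    # AD n corresponds to AUC n + 753, so AD 1 is AUC 754
    return year_ad + 753

def auc_from_bc(year_bc):
    # n BC corresponds to AUC 754 - n, so 1 BC is AUC 753
    return 754 - year_bc

assert auc_from_bc(753) == 1    # the traditional founding year itself
assert auc_from_bc(27) == 727   # foundation of the Roman Empire
assert auc_from_ad(1) == 754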
2559
Arapaoa Island
Arapaoa Island (formerly spelled Arapawa Island) is an island located in the Marlborough Sounds, at the north-east tip of the South Island of New Zealand. The island has a land area of . Queen Charlotte Sound defines its western side, while to the south lies Tory Channel, which is on the sea route between Wellington in the North Island and Picton. Cook Strait's narrowest point is between Arapaoa Island's Perano Head and Cape Terawhiti in the North Island. History. According to Māori oral tradition, the island was where the great navigator Kupe killed the octopus Te Wheke-a-Muturangi. It was from a hill on Arapaoa Island in 1770 that Captain James Cook first saw the sea passage from the Pacific Ocean to the Tasman Sea, which was named Cook Strait. This discovery banished the fond notion of geographers that there existed a great southern continent, Terra Australis. A monument at Cook's Lookout was erected in 1970. From the late 1820s until the mid-1960s, Arapaoa Island was a base for whaling in the Sounds. John Guard established a shore station at Te Awaiti in 1827; however, it initially could only salvage baleen, until the station was equipped from 1830 onwards to process whale oil, targeting right whales. Later, the station at Perano Head on the east coast of the island was used to hunt humpback whales from 1911 to 1964 (see Whaling in New Zealand). The houses built by the Perano family are now operated as tourist accommodations. In the 2000s the former whalers from the Perano and Heberley families, who live on Arapawa, joined a Department of Conservation whale spotting programme to assess how the humpback whale population has recovered since the end of whaling. In August 2014, the spelling of the island's name was officially altered from "Arapawa" to "Arapaoa". Aircraft accident. The 11,000-volt power lines linking the mainland and Arapaoa Island over Tory Channel were struck by an Air Albatross Cessna 402 commuter aircraft in 1985. The crash was witnessed by many passengers on an inter-island Cook Strait ferry. The ferry immediately stopped to dispatch a rescue lifeboat. Along with the two pilots, one entire family died, as did all but one young girl from the other. No bodies were ever found. The sole survivor (Cindy Mosey) was travelling with her family and the other family from Nelson to Wellington to attend a gymnastics competition. The Arapaoa Island crash caused public confidence in Air Albatross to falter, contributing to the company going into liquidation in December of that year. Conservation. Parts of the island have been heavily cleared of native vegetation in the past through burning and logging, and a number of pine forests were planted on the island. Wilding pines, an invasive species in some parts of New Zealand, are being poisoned on the island to allow the regenerating native vegetation to grow. About at Ruaomoko Point on the south-eastern portion of the island will be killed by drilling holes into the trees and injecting poison. Arapaoa Island is known for the breeds of pigs, sheep and goats found only on the island. These became established in the 19th century, but the origin of these breeds is uncertain, and a matter of some speculation. Common suggestions are that they are old English breeds introduced by the early whalers, or by Captain Cook or other early explorers. These breeds are now extinct in England, and the goats surviving in a sanctuary on the island are now also bred in other parts of New Zealand and in the northern hemisphere. 
The small Brothers Islands, which lie off the northeast coast of Arapaoa Island, are a sanctuary for the rare Brothers Island tuatara.
2560
Administrative law
Administrative law is a division of law governing the activities of executive branch agencies of government. Administrative law includes executive branch rule making (executive branch rules are generally referred to as "regulations"), adjudication, and the enforcement of laws. Administrative law is considered a branch of public law. Administrative law deals with the decision-making of administrative units of government that are part of the executive branch, in areas such as international trade, manufacturing, the environment, taxation, broadcasting, immigration, and transport. Administrative law expanded greatly during the 20th century, as legislative bodies worldwide created more government agencies to regulate the social, economic and political spheres of human interaction. Civil law countries often have specialized administrative courts that review these decisions. In the last fifty years, administrative law in many countries of the civil law tradition has opened itself to the influence of rules laid down by supranational legal orders, in which judicial principles carry strong weight. This has led, on the one hand, to changes in some traditional concepts of the administrative law model, as has happened with public procurement or with judicial control of administrative activity; on the other hand, it has built a supranational or international public administration, as in the environmental sector or with reference to education, where, within the United Nations system, administrative structures devoted to coordinating the activity of states in those sectors have continued to grow. In civil law countries. Unlike most common law jurisdictions, most civil law jurisdictions have specialized courts or sections to deal with administrative cases, which as a rule apply procedural rules specifically designed for such cases and distinct from those applied in private law proceedings, such as contract or tort claims. Brazil. In Brazil, administrative cases are typically heard either by the Federal Courts (in matters concerning the Federal Union) or by the Public Treasury divisions of State Courts (in matters concerning the States). In 1998 a constitutional reform led by the government of President Fernando Henrique Cardoso introduced regulatory agencies as a part of the executive branch. Since 1988, Brazilian administrative law has been strongly influenced by the judicial interpretations of the constitutional principles of public administration (Art. 37 of Federal Constitution): legality, impersonality, publicity of administrative acts, morality and efficiency. Chile. In Chile the President of the Republic exercises the administrative function, in collaboration with several ministries or other authorities with "ministerial rank". Each ministry has one or more under-secretaries that act through public service to meet public needs. There is no single specialized court to deal with actions against the administrative entities, but there are several specialized courts and procedures of review. China. Administrative law in China was virtually non-existent before the economic reform era initiated by Deng Xiaoping. Since the 1980s China has constructed a new legal framework for administrative law, establishing control mechanisms for overseeing the bureaucracy, and disciplinary committees for the Chinese Communist Party. 
However, many have argued that these laws are vastly inadequate for controlling government actions, largely because of institutional and systemic obstacles such as a weak judiciary, poorly trained judges and lawyers, and corruption. In 1990, the Administrative Supervision Regulations (行政检查条例) and the Administrative Reconsideration Regulations (行政复议条例) were passed. The 1993 State Civil Servant Provisional Regulations (国家公务员暂行条例) changed the way government officials were selected and promoted, requiring that they pass exams and yearly appraisals, and introducing a rotation system. The three regulations have since been amended and upgraded into laws. In 1994, the State Compensation Law (国家赔偿法) was passed, followed by the Administrative Penalties Law (行政处罚法) in 1996. The Administrative Compulsory Law came into force in 2012, and the Administrative Litigation Law was amended in 2014. A General Administrative Procedure Law is under preparation. France. In France, there is a dual jurisdictional system, with the judiciary branch responsible for civil law and criminal law, and the administrative branch having jurisdiction when a government institution is involved. Most claims against the national or local governments, as well as claims against private bodies providing public services, are handled by administrative courts, which use the "Conseil d'État" (Council of State) as a court of last resort for both ordinary and special courts. The main administrative courts are the "tribunaux administratifs", and the appeal courts are the "cours administratives d'appel". Special administrative courts include the National Court of Asylum Right as well as military, medical and judicial disciplinary bodies. The French body of administrative law is called "droit administratif". Over the course of their history, France's administrative courts have developed an extensive and coherent case law ("jurisprudence constante") and legal doctrine, often before similar concepts were enshrined in constitutional and legal texts. These principles include: French administrative law, the basis of continental administrative law, has had a strong influence on administrative laws in several other countries such as Belgium, Greece, Turkey and Tunisia. Germany. In Germany, administrative law is called "Verwaltungsrecht", which generally governs the relationship between authorities and citizens. It establishes citizens' rights and obligations. It is part of public law, which deals with the organization, the tasks and the actions of the public administration. It also contains rules, regulations, orders and decisions created by and related to administrative agencies, such as federal agencies, federal state authorities and urban administrations, but also admissions offices, fiscal authorities and the like. Administrative law in Germany follows three basic principles. Administrative law in Germany can be divided into general administrative law and special administrative law. General administrative law. General administrative law is essentially governed by the administrative procedures law ("Verwaltungsverfahrensgesetz" [VwVfG]). Other legal sources are the Rules of the Administrative Courts (Verwaltungsgerichtsordnung [VwGO]), the social security code (Sozialgesetzbuch [SGB]) and the general fiscal law (Abgabenordnung [AO]). Administrative procedures law. The "Verwaltungsverfahrensgesetz" (VwVfG), which was enacted in 1977, regulates the main administrative procedures of the federal government. 
It serves to ensure that the public authorities act in accordance with the rule of law. Furthermore, it contains the regulations for mass proceedings and expands the legal protection against the authorities. The VwVfG applies to essentially all public administrative activities of federal agencies, as well as of federal state authorities where they execute federal law. One of the central clauses is § 35 VwVfG. It defines the administrative act, the most common form in which the public administration acts towards a citizen. The definition in § 35 says that an administrative act is characterized by the following features: it is an official act of an authority in the field of public law that resolves an individual case with effect towards the outside. §§ 36–39, §§ 58–59 and § 80 VwVfG govern the structure and the necessary elements of the administrative act. § 48 and § 49 VwVfG also have a high relevance in practice; these paragraphs list the prerequisites for the withdrawal of an unlawful administrative act (§ 48 VwVfG) and the revocation of a lawful administrative act (§ 49 VwVfG). Other legal sources. The administrative court procedure law (Verwaltungsgerichtsordnung [VwGO]), which was enacted in 1960, governs court proceedings before the administrative courts. The VwGO is divided into five parts: the constitution of the courts; actions; remedies and retrial; costs and enforcement; and final clauses and temporary arrangements. In the absence of a rule, the VwGO is supplemented by the code of civil procedure (Zivilprozessordnung [ZPO]) and the judicature act (Gerichtsverfassungsgesetz [GVG]). In addition to the regulation of the administrative procedure, the VwVfG also constitutes the legal protection in administrative law beyond the court procedure. § 68 VwGO governs the preliminary proceeding, called "Vorverfahren" or "Widerspruchsverfahren", which is a strict prerequisite for court proceedings if an action for rescission or a writ of mandamus against an authority is intended. The preliminary proceeding gives each citizen who feels unlawfully treated by an authority the possibility to object to an administrative act and to force a review of it without going to court. The prerequisites for opening the public law remedy are listed in § 40 I VwGO: there must be a conflict in public law without any constitutional aspects and without assignment to another jurisdiction. The social security code (Sozialgesetzbuch [SGB]) and the general fiscal law are less important for administrative law. They supplement the VwVfG and the VwGO in the fields of taxation and social legislation, such as social welfare or financial support for students (BAföG). Special administrative law. Special administrative law consists of various laws; each special sector has its own law. In Germany, the highest administrative court for most matters is the Federal Administrative Court (Bundesverwaltungsgericht). There are federal courts with special jurisdiction in the fields of social security law and tax law. Italy. In Italy administrative law is known as "diritto amministrativo", a branch of public law whose rules govern the organization of the public administration, its activities in pursuit of the public interest, and its relationship with citizens. Its genesis is related to the principle of the separation of powers of the State. 
The administrative power, originally called the "executive", organizes resources and people in order to achieve the public interest objectives defined by law. Netherlands. In the Netherlands, administrative law provisions are usually contained in the various laws about public services and regulations. There is, however, also a single General Administrative Law Act (Awb), which is a rather good example of procedural law in Europe. It applies both to the making of administrative decisions and to the judicial review of these decisions in courts. Another act about judicial procedures in general is the General time provisions act, with general provisions about time schedules in procedures. On the basis of the Awb, citizens can oppose a decision made by an administrative agency within the administration and apply for judicial review in courts if unsuccessful. Before going to court, citizens must usually first object to the decision with the administrative body that made it. This objection procedure allows the administrative body to correct possible mistakes itself and is used to filter cases before they go to court. Sometimes, instead of this objection procedure, a different system called administrative appeal is used. The difference is that the administrative appeal is filed with a different administrative body, usually a higher-ranking one, than the body that made the primary decision. Administrative appeal is available only if the law on which the primary decision is based specifically provides for it. An example involves objecting to a traffic ticket with the district attorney, after which the decision can be appealed in court. Unlike France or Germany, there are no special administrative courts of first instance in the Netherlands; instead, the regular courts have an administrative "chamber" which specializes in administrative appeals. The courts of appeal in administrative cases are specialized depending on the case, but most administrative appeals end up in the judicial section of the Council of State (Raad van State). Sweden. In Sweden, there is a system of administrative courts that considers only administrative law cases and is completely separate from the system of general courts. This system has three tiers, with 12 county administrative courts as the first tier, four administrative courts of appeal as the second tier, and the Supreme Administrative Court of Sweden as the third tier. Migration cases are handled in a two-tier system, effectively within the system of general administrative courts. Three of the administrative courts serve as migration courts, with the Administrative Court of Appeal in Stockholm serving as the Migration Court of Appeal. Taiwan (ROC). In Taiwan, under the Constitutional Procedure Act (憲法訴訟法) enacted in 2019 (formerly the Constitutional Interpretation Procedure Act of 1993), the Justices of the Constitutional Court of the Judicial Yuan are in charge of judicial interpretation. As of 2019, this council had made 757 interpretations. Turkey. In Turkey, lawsuits against the acts and actions of national or local governments and public bodies are handled by administrative courts. The decisions of the administrative courts are reviewed by the Regional Administrative Courts and the Council of State. The Council of State, as a court of last resort, closely resembles the Conseil d'État in France. Ukraine. 
Administrative law in Ukraine is a homogeneous legal substance isolated in a system of jurisprudence characterized as: (1) a branch of law; (2) a science; (3) a discipline. In common law countries. Generally speaking, most countries that follow the principles of common law have developed procedures for judicial review that limit the reviewability of decisions made by administrative law bodies. Often these procedures are coupled with legislation or other common law doctrines that establish standards for proper rulemaking. Administrative law may also apply to review of decisions of so-called semi-public bodies, such as non-profit corporations, disciplinary boards, and other decision-making bodies that affect the legal rights of members of a particular group or entity. While administrative decision-making bodies are often controlled by larger governmental units, their decisions could be reviewed by a court of general jurisdiction under some principle of judicial review based upon due process (United States) or fundamental justice (Canada). Judicial review of administrative decisions is different from an administrative appeal. When sitting in review of a decision, the Court will only look at the method in which the decision was arrived at, whereas in an administrative appeal the correctness of the decision itself will be examined, usually by a higher body in the agency. This difference is vital in appreciating administrative law in common law countries. The scope of judicial review may be limited to certain questions of fairness, or whether the administrative action is "ultra vires". In terms of ultra vires actions in the broad sense, a reviewing court may set aside an administrative decision if it is unreasonable (under Canadian law, following the rejection of the "Patently Unreasonable" standard by the Supreme Court in Dunsmuir v New Brunswick), "Wednesbury" unreasonable (under British law), or arbitrary and capricious (under U.S. Administrative Procedure Act and New York State law). Administrative law, as laid down by the Supreme Court of India, has also recognized two more grounds of judicial review which were recognized but not applied by English Courts, namely legitimate expectation and proportionality. The powers to review administrative decisions are usually established by statute, but were originally developed from the royal prerogative writs of English law, such as the writ of mandamus and the writ of certiorari. In certain common law jurisdictions, such as India or Pakistan, the power to pass such writs is a Constitutionally guaranteed power. This power is seen as fundamental to the power of judicial review and an aspect of the independent judiciary. United States. In the United States, many government agencies are organized under the executive branch of government, although a few are part of the judicial or legislative branches. In the federal government, the executive branch, led by the president, controls the federal executive departments, which are led by secretaries who are members of the United States Cabinet. The many independent agencies of the United States government created by statutes enacted by Congress exist outside of the federal executive departments but are still part of the executive branch. Congress has also created some special judicial bodies known as Article I tribunals to handle some areas of administrative law. The actions of executive agencies and independent agencies are the main focus of American administrative law. 
In response to the rapid creation of new independent agencies in the early twentieth century (see discussion below), Congress enacted the Administrative Procedure Act (APA) in 1946. Many of the independent agencies operate as miniature versions of the tripartite federal government, with the authority to "legislate" (through rulemaking; see Federal Register and Code of Federal Regulations), "adjudicate" (through administrative hearings), and to "execute" administrative goals (through agency enforcement personnel). Because the United States Constitution sets no limits on this tripartite authority of administrative agencies, Congress enacted the APA to establish fair administrative law procedures to comply with the constitutional requirements of due process. Agency procedures are drawn from four sources of authority: the APA, organic statutes, agency rules, and informal agency practice. It is important to note, though, that agencies can only act within their congressionally delegated authority, and must comply with the requirements of the APA. At the state level, the first version of the Model State Administrative Procedure Act was promulgated and published in 1946 by the Uniform Law Commission (ULC), the same year in which the federal Administrative Procedure Act was drafted. It incorporated basic principles with only enough elaboration of detail to support essential features; it is therefore a "model", and not a "uniform", act. A model act is needed because administrative law in the states is not uniform, and a variety of approaches are used in the various states. The act was later modified in 1961 and 1981. The present version is the 2010 Model State Administrative Procedure Act (MSAPA), which maintains continuity with the earlier versions. The reason for the revision is that, in the past two decades, state legislatures, dissatisfied with agency rule-making and adjudication, have enacted statutes that modify administrative adjudication and rule-making procedures. The American Bar Association's official journal concerning administrative law is the "Administrative Law Review", a quarterly publication that is managed and edited by students at the Washington College of Law. Historical development. Stephen Breyer, a U.S. Supreme Court Justice from 1994 to 2022, divides the history of administrative law in the United States into six discrete periods in his book, "Administrative Law & Regulatory Policy" (3d Ed., 1992): Agriculture. The agricultural sector is one of the most heavily regulated sectors in the U.S. economy, as it is regulated in various ways at the international, federal, state, and local levels. Consequently, administrative law is a significant component of the discipline of agricultural law. The United States Department of Agriculture and its myriad agencies such as the Agricultural Marketing Service are the primary sources of regulatory activity, although other administrative bodies such as the Environmental Protection Agency play a significant regulatory role as well.
2563
Arthur Phillip
Admiral Arthur Phillip (11 October 1738 – 31 August 1814) was a British Royal Navy officer who served as the first governor of the Colony of New South Wales. Phillip was educated at Greenwich Hospital School from June 1751 until December 1753. He then became an apprentice on the whaling ship "Fortune". With the outbreak of the Seven Years' War against France, Phillip enlisted in the Royal Navy as captain's servant to Michael Everitt aboard . With Everitt, Phillip also served on and . Phillip was promoted to lieutenant on 7 June 1761, before being put on half-pay at the end of hostilities on 25 April 1763. Seconded to the Portuguese Navy in 1774, he served in the war against Spain. Returning to Royal Navy service in 1778, in 1782 Phillip, in command of , was to capture Spanish colonies in South America, but an armistice was concluded before he reached his destination. In 1784, Phillip was employed by Home Office Under Secretary Evan Nepean, to survey French defences in Europe. In 1786 Phillip was appointed by Lord Sydney as the commander of the First Fleet, a fleet of 11 ships whose crew were to establish a penal colony and a settlement at Botany Bay, New South Wales. On arriving at Botany Bay, Phillip found the site unsuitable and searched for a more habitable site for a settlement, which he found in Port Jackson – the site of Sydney, Australia, today. Phillip was a far-sighted governor who soon realised that New South Wales would need a civil administration and a system for emancipating convicts. However, his plan to bring skilled tradesmen on the First Fleet's voyage had been rejected. Consequently, he faced immense problems with labour, discipline, and supply. Phillip wanted harmonious relations with the local indigenous peoples, in the belief that everyone in the colony was a British citizen and was protected by the law as such, therefore the indigenous peoples had the same rights as everyone under Phillip's command. Eventually, cultural differences between the two groups of people led to conflict. The arrival of more convicts with the Second and Third Fleets placed new pressures on scarce local resources. By the time Phillip sailed home in December 1792, the colony was taking shape, with official land grants, systematic farming, and a water supply in place. On 11 December 1792, Phillip left the colony to return to Britain to receive medical treatment for kidney stones. He had planned to return to Australia, but medical advisors recommended he resign from the governorship. His health recovered and he returned to active duty in the Navy in 1796, holding a number of commands in home waters before being put in command of the Hampshire Sea Fencibles. He eventually retired from active naval service in 1805. He spent his final years of retirement in Bath, Somerset, before his death on 31 August 1814. As the first Governor of New South Wales, a number of places in Australia are named after him, including Port Phillip, Phillip Island, Phillip Street in Sydney, the suburb of Phillip in Canberra and the Governor Phillip Tower building in Sydney, as well as many streets, parks, and schools. Early life. Arthur Phillip was born on 11 October 1738, in the Parish of All Hallows, in Bread Street, London. He was the son of Jacob Phillip, an immigrant from Frankfurt, who by various accounts was a language teacher, a merchant vessel owner, a merchant captain, or a common seaman. 
His mother, Elizabeth Breach, was the widow of a common seaman by the name of John Herbert, who had died of disease in Jamaica aboard on 13 August 1732. At the time of Arthur Phillip's birth, his family maintained a modest existence as tenants near Cheapside in the City of London. There are no surviving records of Phillip's early childhood. His father, Jacob, died in 1739, after which the Phillip family would have a low income. Arthur went to sea on a British naval vessel aged nine. On 22 June 1751, he was accepted into the Greenwich Hospital School, a charity school for the sons of indigent seafarers. In accordance with the school's curriculum, his education focused on literacy, arithmetic, and navigational skills, including cartography. His headmaster, Reverend Francis Swinden, observed that in personality, Phillip was an "unassuming, reasonable, business-like to the smallest degree in everything he undertakes". Phillip remained at the Greenwich Hospital School for two and a half years, longer than the average student stay of one year. At the end of 1753, he was granted a seven-year indenture as an apprentice aboard "Fortune", a 210-ton whaling vessel commanded by merchant mariner William Readhead. Phillip left the Greenwich Hospital School on 1 December, and spent the next few months aboard the "Fortune", awaiting the start of the 1754 whaling season. Contemporary portraits depict Phillip as shorter than average, with an olive complexion and dark eyes. A long nose and a pronounced lower lip dominated his "smooth pear of a skull" as quoted by Robert Hughes. Early maritime career. Whaling and merchant expeditions. In April 1754 "Fortune" headed out to hunt whales near Svalbard in the Barents Sea. As an apprentice Phillip's responsibilities included stripping blubber from whale carcasses and helping to pack it into barrels. Food was scarce, and "Fortune"s 30 crew members supplemented their diet with bird's eggs, scurvy grass, and, where possible, reindeer. The ship returned to England on 20 July 1754. The whaling crew were paid and replaced with twelve sailors for a winter voyage to the Mediterranean. Phillip remained aboard as "Fortune" undertook an outward trading voyage to Barcelona and Livorno carrying salt and raisins, returning via Rotterdam with a cargo of grains and citrus. The ship returned to England in April 1755 and sailed immediately for Svalbard for that year's whale hunt. Phillip was still a member of the crew but abandoned his apprenticeship when the ship returned to England on 27 July. Royal Navy and the Seven Years' War. On 16 October 1755, Phillip enlisted in the Royal Navy as captain's servant aboard the 68-gun , commanded by his mother's cousin, Captain Michael Everitt. As a member of "Buckingham"s crew, Phillip served in home waters until April 1756 and then joined Admiral John Byng's Mediterranean fleet. The "Buckingham" was Rear-Admiral Temple West's flagship at the Battle of Minorca on 20 May 1756. Phillip moved on 1 August 1757, with Everitt, to the 90-gun , which took part in the Raid on St Malo on 5–12 June 1758. Phillip, again with Captain Everitt, transferred on 28 December 1758 to the 64-gun , which went to the West Indies to serve at the Siege of Havana. On 7 June 1761, Phillip was commissioned as a lieutenant in recognition for his active service. With the coming of peace on 25 April 1763, he was retired on half-pay. Retirement and the Portuguese Navy. 
In July 1763, Phillip married Margaret Charlotte Denison, known as Charlott, a widow 16 years his senior, and moved to Glasshayes in Lyndhurst, Hampshire, establishing a farm there. The marriage was unhappy, and the couple separated in 1769 when Phillip returned to the Navy. The following year, he was posted as second lieutenant aboard , a newly built 74-gun ship of the line. In 1774, Phillip was seconded to the Portuguese Navy as a captain, serving in the war against Spain. While with the Portuguese Navy, Phillip commanded a 26-gun frigate, "Nossa Senhora do Pilar". On that ship, he took a detachment of troops from Rio de Janeiro to Colonia do Sacramento on the Río de la Plata (opposite Buenos Aires) to relieve the garrison there. The voyage also conveyed a consignment of convicts assigned to carry out work at Colonia. During a storm encountered in the course of the voyage, the convicts assisted in working the ship, and on arriving at Colonia, Phillip recommended that they be rewarded for saving the ship by remission of their sentences. A garbled version of this recommendation eventually found its way into the English press in 1786, when Phillip was appointed to lead the expedition to Sydney. Phillip played a leading role in the capture of the Spanish ship "San Agustín", on 19 April 1777, off Santa Catarina. The Portuguese Navy commissioned her as the "Santo Agostinho", under Phillip's command. The action was reported in the English press: Madrid, 28 Aug. Letters from Lisbon bring the following Account from Rio Janeiro: That the St. Augustine, of 70 Guns, having been separated from the Squadron of M. Casa Tilly, was attacked by two Portugueze Ships, against which they defended themselves for a Day and a Night, but being next Day surrounded by the Portugueze Fleet, was obliged to surrender. Recommissioned into Royal Navy. In 1778, with Britain again at war, Phillip was recalled to Royal Navy service and on 9 October was appointed first lieutenant of the 74-gun as part of the Channel fleet. He was promoted to commander on 2 September 1779 and given command of the 8-gun fireship HMS "Basilisk". With Spain's entry into the conflict, Phillip had a series of private meetings with the First Lord of the Admiralty, the Earl of Sandwich, sharing his charts and knowledge about the South American coastlines. Phillip was promoted to post-captain on 30 November 1781 and given command of the 20-gun . "Ariadne" was sent to the Elbe to escort a transport ship carrying a detachment of Hanoverian troops, arriving at the port of Cuxhaven on 28 December; the estuary then froze over, trapping "Ariadne" in the harbour. In March 1782, Phillip arrived in England with the Hanoverian troops. In the following months "Ariadne" took on a new lieutenant, Philip Gidley King, whom Phillip took under his wing. "Ariadne" was used to patrol the Channel, where, on 30 June, she captured the French frigate "Le Robecq". With a change of government on 27 March 1782, Sandwich retired from the Admiralty, and Lord Germain was replaced as Secretary of State for Home and American Affairs by the Earl of Shelburne. In another change of government before 10 July 1782, Thomas Townshend replaced Shelburne and assumed responsibility for organising an expedition against Spanish America. Like Sandwich and Germain, he turned to Phillip for planning advice. 
The plan was for a squadron of three ships of the line and a frigate to mount a raid on Buenos Aires and Monte Video, then to proceed to the coasts of Chile, Peru, and Mexico to maraud, and ultimately to cross the Pacific to join the British Navy's East India squadron for an attack on Manila. On 27 December 1782, Phillip, took charge of the 64-gun . The expedition, consisting of the 70-gun , the 74-gun , "Europa", and the 32-gun frigate , sailed on 16 January 1783 under the command of Commodore Robert Kingsmill. Shortly after the ships' departure, an armistice was concluded between Great Britain and Spain. Phillip learnt of this in April when he put in for storm repairs at Rio de Janeiro. Phillip wrote to Townshend from Rio de Janeiro on 25 April 1783, expressing his disappointment that the ending of the American War had robbed him of the opportunity for naval glory in South America. Survey work in Europe. After his return to England in April 1784, Phillip remained in close contact with Townshend, now Lord Sydney, and Home Office Under Secretary Evan Nepean. From October 1784 to September 1786, Nepean, who was in charge of the Secret Service relating to the Bourbon Powers, France, and Spain, employed him to spy on the French naval arsenals at Toulon and other ports. There was fear that Britain would soon be at war with these powers as a consequence of the Batavian Revolution in the Netherlands. Colonial service. Lord Sandwich, together with the president of the Royal Society, Sir Joseph Banks, the scientist who had accompanied Lieutenant James Cook on his 1770 voyage, was advocating the establishment of a British colony in Botany Bay, New South Wales. Banks accepted an offer of assistance from the American loyalist James Matra in July 1783. Under Banks' guidance, Matra rapidly produced "A Proposal for Establishing a Settlement in New South Wales" (24 August 1783), with a fully developed set of reasons for a colony composed of American loyalists, Chinese, and South Sea Islanders (but not convicts). Thomas Townshend, Lord Sydney, as Secretary of State for the Home Office and minister in charge, decided to establish the proposed colony in Australia. This decision was taken for two reasons: the ending of the option to transport criminals to North America following the American Revolution, and the need for a base in the Pacific to counter French expansion. In September 1786, Phillip was appointed commodore of the fleet, which came to be known as the First Fleet. His assignment was to transport convicts and soldiers to establish a colony at Botany Bay. Upon arriving there, Phillip was to assume the powers of captain general and governor in chief of the new colony. A subsidiary colony was to be founded on Norfolk Island, as recommended by Sir John Call and Sir George Young, to take advantage of that island's native flax (harakeke) and timber for naval purposes. Voyage to Colony of New South Wales. On 25 October 1786, the 20-gun , lying in the dock at Deptford, was commissioned, with the command given to Phillip. The armed tender , under the command of Lieutenant Henry Lidgbird Ball, was also commissioned to join the expedition. On 15 December, Captain John Hunter was assigned as second captain to "Sirius" to command in the absence of Phillip, who as governor of the colony, would be where the seat of government was to be fixed. Phillip had a difficult time assembling the fleet, which was to make an eight-month sea voyage and then establish a colony. 
Everything a new colony might need had to be taken, since Phillip had no real idea of what he might find when he got there. There were few funds available for equipping the expedition. His suggestion that people with experience in farming, building, and crafts be included was rejected by the Home Office. Most of the 772 convicts were petty thieves from the London slums. A contingent of marines and a handful of other officers who were to administer the colony accompanied Phillip. The fleet of 11 ships and about 1,500 people, under Phillip's command, sailed from Portsmouth, England, on 13 May 1787; provided an escort out of British waters. On 3 June 1787, the fleet anchored at Santa Cruz, Tenerife. On 10 June they set sail to cross the Atlantic to Rio de Janeiro, taking advantage of favourable trade winds and ocean currents. The Fleet reached Rio de Janeiro on 5 August and stayed for a month to resupply. The Fleet left Rio de Janeiro on 4 September to run before the westerlies to Table Bay in Southern Africa, which it reached on 13 October; this was the last port of call before Botany Bay. On 25 November, Phillip transferred from the "Sirius" to the faster "Supply", and with the faster ships of the fleet hastened ahead to prepare for the arrival of the rest of the fleet. However, this "flying squadron", as Frost called it, reached Botany Bay only hours before the rest of the Fleet, so no preparatory work was possible. "Supply" reached Botany Bay on 18 January 1788; the three fastest transports in the advance group arrived on 19 January; slower ships, including "Sirius", arrived on 20 January. Phillip soon decided that the site, chosen on the recommendation of Sir Joseph Banks, who had accompanied James Cook in 1770, was not suitable, since it had poor soil, no secure anchorage, and no reliable water source. Cook was an explorer and Banks had a scientific interest, whereas Phillip's differing assessment of the site came from his perspective as, quoted by Tyrrell, "custodian of over a thousand convicts" for whom he was responsible. After some exploration, Phillip decided to go on to Port Jackson, and on 26 January, the marines and the convicts landed at a cove, which Phillip named for Lord Sydney. This date later became Australia's national day, Australia Day. Governor Phillip formally proclaimed the colony on 7 February 1788 in Sydney. Sydney Cove offered a fresh water supply and a safe harbour, which Phillip famously described as: "being with out exception the finest Harbour in the World [...] Here a Thousand Sail of the Line may ride in the most perfect Security." Establishing a settlement. On 26 January, the Union Jack was raised, and possession of the land was taken formally in the name of King George III. The next day, sailors from "Sirius", a party of marines, and a number of male convicts were disembarked to fell timber and clear the ground for the erection of tents. The remaining large company of male convicts disembarked from the transports over the following days. Phillip himself structured the ordering of the camp. His own tent as governor and those of his attendant staff and servants were set on the east side of Tank Stream, with the tents of the male convicts and marines on the west. During this time, priority was given to building permanent storehouses for the settlement's provisions. On 29 January, the governor's portable house was placed, and livestock were landed the next day. 
The female convicts disembarked on 6 February; the general camp for the women was to the north of the governor's house and separated from the male convicts by the houses of chaplain Richard Johnson and the Judge Advocate, Marine Captain David Collins. On 7 February 1788, Phillip and his government were formally inaugurated. On 15 February 1788, Phillip sent Lieutenant Philip Gidley King with a party of 23, including 15 convicts, to establish the colony at Norfolk Island, partly in response to a perceived threat of losing the island to the French, and partly to establish an alternative food source for the mainland colony. Governor of New South Wales. When Phillip was appointed as governor-designate of the colony and began to plan the expedition, he requested that the convicts that were being sent be trained; only twelve carpenters and a few men who knew anything about agriculture were sent. Seamen with technical and building skills were commandeered immediately. The colony's isolation meant that it took almost two years for Phillip to receive replies to his dispatches from his superiors in London. Phillip established a civil administration, with courts of law, that applied to everyone living in the settlement. Two convicts, Henry and Susannah Kable, sought to sue Duncan Sinclair, the captain of the "Alexander", for stealing their possessions during the voyage. Sinclair, believing that as convicts they had no protection from the law, as was the case in Britain, boasted that he could not be sued. Despite this, the court found for the plaintiffs and ordered the captain to make restitution for the theft of the Kables' possessions. Phillip had drawn up a detailed memorandum of his plans for the proposed new colony. In one paragraph he wrote: "The laws of this country [England] will of course, be introduced in [New] South Wales, and there is one that I would wish to take place from the moment his Majesty's forces take possession of the country: That there can be no slavery in a free land, and consequently no slaves." Nevertheless, Phillip believed in severe discipline; floggings and hangings were commonplace, although Phillip commuted many death sentences. The settlement's supplies were rationed equally to convicts, officers, and marines, and females were given two-thirds of the weekly males' rations. In late February, six convicts were brought before the criminal court for stealing supplies. They were sentenced to death; the ringleader, Thomas Barrett, was hanged that day. Phillip gave the rest a reprieve. They were banished to an island in the harbour and given only bread and water. The governor also expanded the settlement's knowledge of the landscape. Two officers from "Sirius", Captain John Hunter and Lieutenant William Bradley, conducted a thorough survey of the harbour at Sydney Cove. Phillip later joined them on an expedition to survey Broken Bay. The fleet's ships left over the next months, with "Sirius" and "Supply" remaining in the colony under command of the governor. They were used to survey and map the coastlines and waterways. Scurvy broke out, so "Sirius" left Port Jackson for Cape Town under the command of Hunter in October 1788, having been sent for supplies. The voyage, which completed a circumnavigation, returned to Sydney Cove in April, just in time to save the near-starving colony. 
As an experienced farmhand, Phillip's appointed servant Henry Edward Dodd, served as farm superintendent at Farm Cove, where he successfully cultivated the first crops, later moving to Rose Hill, where the soil was better. James Ruse, a convict, was later appointed to the position after Dodd died in 1791. When Ruse succeeded in the farming endeavours, he received the colony's first land grant. In June 1790, more convicts arrived with the Second Fleet, but , carrying more supplies, was disabled en route after hitting an iceberg, leaving the colony low on provisions again. "Supply", the only ship left under colonial command after "Sirius" was wrecked 19 March 1790 trying to land men and supplies on Norfolk Island, was sent to Batavia for supplies. In late 1792, Phillip, whose health was suffering, relinquished the governorship to Major Francis Grose, lieutenant-governor and commander of New South Wales Corps. On 11 December 1792, Phillip left for Britain, on the "Atlantic", which had arrived with convicts of the Third Fleet. Phillip was unable to follow his original intention of returning to Port Jackson once his health was restored, as medical advice compelled him to resign formally on 23 July 1793. Military personnel in colony. The main challenge for order and harmony in the settlement came not from the convicts secured there on terms of good behaviour, but from the attitude of officers from the New South Wales Marine Corps. As Commander in Chief, Phillip was in command of both the naval and marine forces; his naval officers readily obeyed his commands, but a measure of co-operation from the marine officers ran against their tradition. Major Robert Ross and his officers (with the exception of a few such as David Collins, Watkin Tench, and William Dawes) refused to do anything other than guard duty, claiming that they were neither gaolers, supervisors, nor policemen. Four companies of marines, consisting of 160 privates with 52 officers and NCO's, accompanied the First Fleet to Botany Bay. In addition, there were 34 officers and men serving in the Ship's Complement of Marines aboard "Sirius" and "Supply", bringing the total to 246 who departed England. Ross supported and encouraged his fellow officers in their conflicts with Phillip, engaged in clashes of his own, and complained of the governor's actions to the Home Office. Phillip, more placid and forbearing in temperament, was anxious in the interests of the community as a whole to avoid friction between the civil and military authorities. Though firm in his attitude, he endeavoured to placate Ross, but to little effect. In the end, he solved the problem by ordering Ross to Norfolk Island on 5 March 1790 to replace the commandant there. Beginning with guards arriving with the Second and Third fleets, but officially with the arrival of on 22 September 1791, the New South Wales Marines were relieved by a newly formed British Army regiment of foot, the New South Wales Corps. On 18 December 1791, "Gorgon" left Port Jackson, taking home the larger part of the still-serving New South Wales Marines. There remained in New South Wales a company of active marines serving under Captain George Johnston, who had been Phillip's aide-de-camp, that transferred to the New South Wales Corps. Also remaining in the colony were discharged marines, many of whom became settlers. The official departure of the last serving marines from the colony was in December 1792, with Governor Phillip on "Atlantic". 
Major Francis Grose, commander of the New South Wales Corps, had replaced Ross as the Lieutenant-Governor and took over command of the colony when Phillip returned to Britain. Relations with indigenous peoples. Phillip's official orders with regard to Aboriginal people were to "conciliate their affections", to "live in amity and kindness with them", and to punish anyone who should "wantonly destroy them, or give them any unnecessary interruption in the exercise of their several occupations". The first meeting between the colonists and the Eora, the local Aboriginal people, took place at Botany Bay. When Phillip went ashore, gifts were exchanged; Phillip and the officers thus began their relationship with the Eora through gift-giving, hilarity, and dancing, but also by showing them what their guns could do. Anyone found harming or killing Aboriginal people without provocation would be severely punished. After the early meetings, dancing, and musket demonstrations, the Eora avoided the settlement in Sydney Cove for the first year, but they warned and then attacked whenever colonists trespassed on their lands away from the settlement. Part of Phillip's early plan for peaceful cohabitation had been to persuade some Eora, preferably a family, to come and live in the town with the British so that the colonists could learn about the Eora's language, beliefs, and customs. By the end of the first year, as none of the Eora had come to live in the settlement, Phillip decided on a more ruthless strategy, and ordered the capture of some Eora warriors. The man who was captured was Arabanoo, from whom Phillip and his officers started to learn language and customs. Arabanoo died in April 1789 of smallpox, which also ravaged the rest of the Eora population. Phillip again ordered the boats to Manly Cove, where two more warriors were captured, Coleby and Bennelong; Coleby soon escaped, but Bennelong remained. Bennelong and Phillip formed a kind of friendship, before he too escaped. Four months after Bennelong escaped from Sydney, Phillip was invited to a whale feast at Manly, where Bennelong greeted him in a friendly and jovial way. Phillip was suddenly surrounded by warriors and speared in the shoulder by a man called Willemering. Perhaps realising that the spearing was retaliation for the kidnappings, Phillip ordered his men not to retaliate and directed that no action be taken over the incident. Friendly relations were reestablished afterwards, with Bennelong even returning to Sydney with his family. Even though there were now friendly relations with the Indigenous people around Sydney Cove, the same could not be said of those around Botany Bay, who had killed or wounded 17 colonists. Phillip despatched orders, as quoted by Tench, "to put to death ten... [and] cut off the heads of the slain... to infuse a universal terror, which might operate to prevent further mischief". Even though two expeditions were despatched under the command of Watkin Tench, no one was apprehended. On 11 December 1792, when Phillip returned to Britain, Bennelong and another Aboriginal man named Yemmerrawanne (or Imeerawanyee) travelled with him on the "Atlantic". Later life and death. Phillip's estranged wife, Charlott, died on 3 August 1792 and was buried in St Beuno's Churchyard, Llanycil, Bala, Merionethshire. Phillip, a resident in Marylebone, married Isabella Whitehead of Bath in St Marylebone Church of England on 8 May 1794. His health recovered, he was recommissioned in March 1796 to the 74-gun as part of the Channel fleet. 
In October, his command was switched to the 74-gun . In September 1797, Phillip was transferred again to the 90-gun , command of which he held until December of that year. During 1798–99, Phillip commanded the Hampshire Sea Fencibles, then appointed inspector of the Impress Service, in which capacity he and a secretary toured the outposts of Britain to report on the strengths of the various posts. In the ordinary course of events he was promoted to Rear-Admiral on 1 January 1801. Phillip retired in 1805 from active service in the Navy, was promoted to Vice-Admiral on 13 December 1806, and received a final promotion to Admiral of the Blue on 4 June 1814. Phillip suffered a stroke in 1808, which left him partially paralysed. He died 31 August 1814 at his residence, 19 Bennett Street, Bath. He was buried nearby at St Nicholas's Church, Bathampton. His Last Will and Testament has been transcribed and is online. Forgotten for many years, the grave was discovered in November 1897 by a young woman cleaning the church, who found the name after lifting matting from the floor; the historian James Bonwick had been searching Bath records for its location. An annual service of remembrance is held at the church around Phillip's birthdate by the Britain–Australia Society. In 2007, Geoffrey Robertson QC alleged that Phillip's remains were no longer in St Nicholas Church, Bathampton, and had been lost: "Captain Arthur Phillip is not where the ledger stone says he is: it may be that he is buried somewhere outside, it may simply be that he is simply lost. But he is not where Australians have been led to believe that he now lies." Legacy. A number of places in Australia bear Phillip's name, including Port Phillip, Phillip Island (Victoria), Phillip Island (Norfolk Island), Phillip Street in Sydney, the federal electorate of Phillip (1949–1993), the suburb of Phillip in Canberra, the Governor Phillip Tower building in Sydney, St Phillip's Church, Sydney (now St Philip's), and many streets, parks, and schools, including a state high school in Parramatta. A monument to Phillip in Bath Abbey Church was unveiled in 1937. Another was unveiled at St Mildred's Church, Bread Street, London, in 1932; that church was destroyed in the London Blitz in 1940, but the principal elements of the monument were re-erected at the west end of Watling Street, near Saint Paul's Cathedral, in 1968. A different bust and memorial is inside the nearby church of St Mary-le-Bow. There is a statue of him in the Royal Botanical Gardens, Sydney. There is a portrait of him by Francis Wheatley in the National Portrait Gallery, London, and in the Mitchell Library, State Library of New South Wales, Sydney. Percival Serle wrote of Phillip in his "Dictionary of Australian Biography": 200th anniversary. As part of a series of events on the bicentenary of his death, a memorial was dedicated in Westminster Abbey on 9 July 2014. In the service, the Dean of Westminster, Very Reverend Dr John Hall, described Phillip as follows: "This modest, yet world-class seaman, linguist, and patriot, whose selfless service laid the secure foundations on which was developed the Commonwealth of Australia, will always be remembered and honoured alongside other pioneers and inventors here in the Nave: David Livingstone, Thomas Cochrane, and Isaac Newton." A similar memorial was unveiled by the outgoing 37th Governor of New South Wales, Marie Bashir, in St James' Church, Sydney, on 31 August 2014. 
A bronze bust was installed at the Museum of Sydney, and a full-day symposium discussed his contributions to the founding of modern Australia. In popular culture. Phillip has been featured in a number of films and television programs: he is portrayed by Sir Cedric Hardwicke in John Farrow's 1953 film "Botany Bay", by Sam Neill in the 2005 film "The Incredible Journey of Mary Bryant", and by David Wenham in the 2015 mini-series "Banished". He is a prominent character in Timberlake Wertenbaker's play "Our Country's Good", in which he commissions Lieutenant Ralph Clark to stage a production of "The Recruiting Officer". He is shown as compassionate and just, but receives little support from his fellow officers.
2573
Angus, Scotland
Angus is one of the 32 local government council areas of Scotland, a registration county and a lieutenancy area. The council area borders Aberdeenshire, Dundee City and Perth and Kinross. Main industries include agriculture and fishing. Global pharmaceuticals company GSK has a significant presence in Montrose in the north of the county. Angus was historically a province, and later a sheriffdom and county (known officially as Forfarshire from the 18th century until 1928), bordering Kincardineshire to the north-east, Aberdeenshire to the north and Perthshire to the west; southwards it faced Fife across the Firth of Tay; these remain the borders of Angus, minus Dundee which now forms its own small separate council area. Angus remains a registration county and a lieutenancy area. In 1975 some of its administrative functions were transferred to the council district of the Tayside Region, and in 1995 further reform resulted in the establishment of the unitary Angus Council. History. Etymology. The name "Angus" indicates the territory of the eighth-century Pictish king of that name. Prehistory. The area that now comprises Angus has been occupied since at least the Neolithic period. Material taken from postholes of an enclosure at Douglasmuir, near Friockheim, about five miles north of Arbroath, has been radiocarbon dated to around 3500 BC. The function of the enclosure is unknown, but it may have been used for agriculture or for ceremonial purposes. Bronze Age archaeology is to be found in abundance in the area. Examples include the short-cist burials found near West Newbigging, about a mile to the north of the town. These burials included pottery urns, a pair of silver discs and a gold armlet. Iron Age archaeology is also well represented, for example in the souterrain near Warddykes cemetery and at West Grange of Conan, as well as the better-known examples at Carlungie and Ardestie. Medieval and later history. The county is traditionally associated with the Pictish territory of Circin, which is thought to have encompassed Angus and the Mearns. Bordering it were the kingdoms of Cé (Mar and Buchan) to the north, Fotla (Atholl) to the west, and Fib (Fife) to the south. The most visible remnants of the Pictish age are the numerous sculptured stones that can be found throughout Angus. Of particular note are the collections found at Aberlemno, St Vigeans, Kirriemuir and Monifieth. Angus is first recorded as one of the provinces of Scotland in 937, when Dubacan, the Mormaer of Angus, is recorded in the "Chronicle of the Kings of Alba" as having died at the Battle of Brunanburh. The signing of the Declaration of Arbroath at Arbroath Abbey in 1320 marked Scotland's establishment as an independent nation. Partly on this basis, Angus is marketed as the birthplace of Scotland. It is an area of rich history from Pictish times onwards. Notable historic sites in addition to Arbroath Abbey include Glamis Castle, Arbroath Signal Tower museum and the Bell Rock Lighthouse, described as one of the Seven Wonders of the Industrial World. Geography. Angus can be split into three geographic areas. To the north and west, the topography is mountainous. This is the area of the Grampian Mountains, Mounth hills and Five Glens of Angus, which is sparsely populated and where the main industry is hill farming. Glas Maol – the highest point in Angus at 1,068 m (3,504 ft) – can be found here, on the tripoint boundary with Perthshire and Aberdeenshire. 
To the south and east the topography consists of rolling hills (such as the Sidlaws) bordering the sea; this area is well populated, with the larger towns. In between lies Strathmore ("the Great Valley"), which is a fertile agricultural area noted for the growing of potatoes, soft fruit and the raising of Aberdeen Angus cattle. Montrose in the north east of the county is notable for its tidal basin and wildlife. Angus's coast is fairly regular, the most prominent features being the headlands of Scurdie Ness and Buddon Ness. The main bodies of water in the county are Loch Lee, Loch Brandy, Carlochy, Loch Wharral, Den of Ogil Reservoir, Loch of Forfar, Loch Fithie, Rescobie Loch, Balgavies Loch, Crombie Reservoir, Monikie Reservoirs, Long Loch, Lundie Loch, Loch of Kinnordy, Loch of Lintrathen, Backwater Reservoir, Auchintaple Loch, Loch Shandra, and Loch Esk. Demography. Population structure. In the 2001 census, the population of Angus was recorded as 108,400. Of these, 20.14% were under the age of 16, 63.15% were between 16 and 65 and 18.05% were aged 65 or above. Of the 16 to 74 age group, 32.84% had no formal qualifications, 27.08% were educated to 'O' Grade/Standard Grade level, 14.38% to Higher level, 7.64% to HND or equivalent level and 18.06% to degree level. Language in Angus. The 2001 census results show that Gaelic is spoken by 0.45% of the Angus population. This, similar to other lowland areas, is lower than the national average of 1.16%. These figures are self-reported and are not broken down into levels of fluency. Meanwhile, the 2011 census found that 38.4% of the population in Angus can speak Scots, above the Scottish average of 30.1%. This puts Angus as the council area with the sixth highest proficiency in Scots, behind only Shetland, Orkney, Moray, Aberdeenshire, and East Ayrshire. Historically, the dominant language in Angus was Pictish until the sixth to seventh centuries AD, when the area became progressively gaelicised, with Pictish extinct by the mid-ninth century. Gaelic/Middle Irish began to retreat from lowland areas in the late-eleventh century and was absent from the Eastern lowlands by the fourteenth century. It was replaced there by Middle Scots, the contemporary local South Northern dialect of Modern Scots, while Gaelic persisted as a majority language in the Highlands and Hebrides until the 19th century. Angus Council are planning to raise the status of Gaelic in the county by adopting a series of measures, including bilingual road signage, communications, vehicle livery and staffing. Government. Local government. The Local Government (Scotland) Act 1889 established a uniform system of county councils in Scotland and realigned the boundaries of many of Scotland's counties. Subsequently, Angus County Council was created in 1890. In May 1975 the county council was abolished and its functions were transferred to Tayside Regional Council; the local area was served by Angus District Council. The county council was based at the County Buildings in Market Street in Forfar. The present Angus Council, one of the 32 local government council areas of Scotland, was established in 1996, when the two-tier system of regional and district councils was abolished and Angus became a single-tier council area. As of the May 2022 elections, the 28 seats on the council are held as follows – SNP 13, Independent 7, Conservative 7, Labour 2. 
The boundaries of the present council area are the same as those of the historic county minus the City of Dundee. The council area borders Aberdeenshire, Dundee City and Perth and Kinross. Structure. The council's civic head is the Provost of Angus. There have been seven Provosts since its establishment in 1996 – Frances Duncan, Bill Middleton, Ruth Leslie-Melville, Helen Oswald, Alex King, Ronnie Proctor and Brian Boyd. Angus is also a lieutenancy area; the Lord Lieutenant of Angus is appointed by the monarch and is unconnected to the council. The council has had four Chief Executives since its formation – Sandy Watson 1996–2006, David Sawers 2006–2011, Richard Stiff 2011–2017 and Margo Williamson 2017 to date. Margo Williamson is the first female Chief Executive since the council was formed. Leadership. The role of provost is largely ceremonial in Angus. Political leadership is instead provided by the leader of the council. The leaders since 1996 have been: Premises. Council meetings are generally held at Forfar Town and County Hall at The Cross in the centre of Forfar. In 2007 the council moved its main offices to a new building called Angus House on Silvie Way in the Orchardbank Business Park on the outskirts of Forfar. Community council areas. Angus is divided into 25 community council areas and all apart from Friockheim district have an active council. The areas are: Aberlemno; Auchterhouse; Carnoustie; City of Brechin & District; Ferryden & Craig; Friockheim & District; Glamis; Hillside, Dun, & Logie Pert; Inverarity; Inveresk; Kirriemuir; Kirriemuir Landward East; Kirriemuir Landward West; Letham & District; Lunanhead & District; Monifieth; Monikie & Newbigging; Montrose; Muirhead, Birkhill and Liff; Murroes & Wellbank; Newtyle & Eassie; Royal Burgh of Arbroath; Royal Burgh of Forfar; Strathmartine; and Tealing. Parliamentary representation. UK Parliament. Angus is represented by three MPs for the UK Parliament. Scottish Parliament. Angus is represented by two constituency MSPs for the Scottish Parliament. In addition to the two constituency MSPs, Angus is also represented by seven MSPs for the North East Scotland electoral region. Transport. The Edinburgh-Aberdeen railway line runs along the coast, through Dundee and the towns of Monifieth, Carnoustie, Arbroath and Montrose. There is a small airport at Dundee, which at present operates flights to London and Belfast. Settlements. Largest settlements by population: Surnames. Most common surnames in Angus (Forfarshire) at the time of the United Kingdom Census of 1881:
2575
André the Giant
André René Roussimoff (19 May 1946 – 28 January 1993), better known by his ring name André the Giant, was a French professional wrestler and actor. Billed as "The Eighth Wonder of the World", Roussimoff was famed for his great size, which was a result of gigantism caused by an excess of growth hormone. Beginning his career in 1966, Roussimoff relocated to North America in 1971. From 1973 to the mid-1980s, Roussimoff was booked by World Wide Wrestling Federation (WWWF) promoter Vincent J. McMahon as a roving "special attraction" who wrestled for promotions throughout the United States, as well as in Japan for New Japan Pro-Wrestling. During the 1980s wrestling boom, Roussimoff became a mainstay of the WWWF (by then renamed the World Wrestling Federation), being paired with the villainous manager Bobby Heenan and feuding with Hulk Hogan. The two headlined WrestleMania III in 1987, and in 1988, he defeated Hogan to win the WWF Championship, his sole world heavyweight championship, on the first episode of "The Main Event". As his WWF career wound down after WrestleMania VI in 1990, Roussimoff wrestled primarily for All Japan Pro-Wrestling, usually alongside Giant Baba, until his sudden death. After his death in 1993, Roussimoff became the inaugural inductee into the newly created WWF Hall of Fame. He was later a charter member of the "Wrestling Observer Newsletter" Hall of Fame and the Professional Wrestling Hall of Fame; the latter describes him as being "one of the most recognizable figures in the world both as a professional wrestler and as a pop culture icon." Outside of wrestling, Roussimoff is best known for appearing as Fezzik, the giant in the 1987 film "The Princess Bride". Early life. André René Roussimoff was born on 19 May 1946 in Coulommiers, Seine-et-Marne, the son of immigrants Boris Roussimoff (1907–1993) and Mariann Roussimoff Stoeff (1910–1997); his father was Bulgarian and his mother was Polish. He had two older siblings and two younger. His childhood nickname was Dédé. At birth, André weighed ; as a child, he displayed symptoms of gigantism, and was noted as "a good head taller than other kids", with abnormally long hands. In a 1970s television interview, Roussimoff stated that his mother was tall and his father tall, and that according to his father his grandfather was tall. By the time he was 12, Roussimoff stood . Roussimoff was an average student, though good at mathematics. After finishing school at 14, as he did not think higher education was necessary for a farm laborer, he joined the workforce; contrary to popular legend, he did not drop out of school, as compulsory education in France at the time ended at 14. Roussimoff spent years working on his father's farm in Molien, where, according to his brother Jacques, he could perform the work of three men. He also completed an apprenticeship in woodworking, and next worked in a factory that manufactured engines for hay balers. None of these brought him any satisfaction. While Roussimoff was growing up in the 1950s, the Irish playwright Samuel Beckett was one of several adults who sometimes drove local children to school, including Roussimoff and his siblings. They had a surprising amount of common ground and bonded over their love of cricket, with Roussimoff recalling that the two rarely talked about anything else. Professional wrestling career. Early career (1964–1973). 
At the age of 18, Roussimoff moved to Paris and was taught professional wrestling by a local promoter, Robert Lageat, who recognized the earning potential of Roussimoff's size. He trained at night and worked as a mover during the day to pay living expenses. Roussimoff was billed as "Géant Ferré", a name based on the Picardian folk hero , and began wrestling in Paris and nearby areas. Canadian promoter and wrestler Frank Valois met Roussimoff in 1966 and years later became his business manager and adviser. Roussimoff began making a name for himself wrestling in the United Kingdom, Germany, Australia, New Zealand, and Africa. He made his Japanese debut for the International Wrestling Enterprise in 1970, billed as "Monster Roussimoff". Wrestling as both a singles and tag-team competitor, he was quickly made the company's tag-team champion alongside Michael Nador. During his time in Japan, doctors first informed Roussimoff that he suffered from acromegaly. Roussimoff next moved to Montreal, Canada, in 1971, where he became an immediate success, regularly selling out the Montreal Forum. Promoters eventually ran out of plausible opponents for him and, as the novelty of his size wore off, the gate receipts dwindled. Roussimoff was defeated by Adnan Al-Kaissie in Baghdad in 1971, and wrestled numerous times in 1971 for Verne Gagne's American Wrestling Association (AWA) as a special attraction. Touring special attraction (1973–1984). In 1973, Vincent J. McMahon, founder of the World Wide Wrestling Federation (WWWF), suggested several changes to Roussimoff's booking and presentation. He felt Roussimoff should be portrayed as a large, immovable monster, and to enhance the perception of his size, McMahon discouraged Roussimoff from performing maneuvers such as dropkicks (although he was capable of performing such agile maneuvers before his health deteriorated in later life). He also began billing Roussimoff as "André the Giant" and set up a travel-intensive schedule, lending him to wrestling associations around the world, to keep him from becoming overexposed in any area. Promoters had to guarantee Roussimoff a certain amount of money as well as pay McMahon's WWF booking fee. On 24 March 1973, Roussimoff debuted in the World Wide Wrestling Federation (later World Wrestling Federation) as a fan favorite, defeating Frank Valois and Bull Pometti in a handicap match in Philadelphia. Two days later he made his debut in New York's Madison Square Garden, defeating Buddy Wolfe. Roussimoff was one of professional wrestling's most beloved babyfaces throughout the 1970s and early 1980s, and Gorilla Monsoon often stated that Roussimoff had not been defeated by pinfall or submission in the 15 years prior to WrestleMania III. He had, however, lost matches outside the WWF: to Adnan Al-Kaissie in Baghdad, Iraq, in 1971; by pinfall to Don Leo Jonathan in Montreal and to Killer Kowalski in Quebec City, both in 1972; two draws and a count-out loss against The Sheik in Toronto in 1974, after a fireball was thrown in André's face; by knockout to Jerry Lawler in Memphis in 1975 and by count-out to Lawler in Louisville in 1977; a draw with Bobo Brazil at a battle royal in Detroit in 1976; to Ronnie Garvin in Knoxville in 1978; to Stan Hansen by disqualification in Japan in 1981; by count-out to Kamala in Toronto in 1984; to Canek in Mexico in 1984; and by submission in Japan to Strong Kobayashi in 1972 and to Antonio Inoki in 1986. 
He also had sixty-minute time-limit draws with two of the three major world champions of the day, Harley Race in Houston in 1979 and Nick Bockwinkel in Chicago in 1976. In 1976, at the second Showdown at Shea, Roussimoff fought professional boxer Chuck Wepner in an unscripted boxer-versus-wrestler fight. The wild fight was televised as part of the undercard of the Muhammad Ali versus Antonio Inoki fight and ended when Roussimoff threw Wepner over the top rope and out of the ring, winning via count-out. In 1980, he feuded with Hulk Hogan; unlike their more famous matches in the late 1980s, Hogan was the villain and Roussimoff the hero. They wrestled at Shea Stadium's third Showdown at Shea event and in Pennsylvania, where, after Roussimoff pinned Hogan to win the match, Hogan bodyslammed him much as he would in their legendary WrestleMania III match in 1987. The feud continued in Japan in 1982 and 1983 with their roles reversed and with Antonio Inoki also involved. One of Roussimoff's feuds pitted him against the "Mongolian Giant" Killer Khan. According to the storyline, Khan snapped Roussimoff's ankle during a match on 2 May 1981 in Rochester, New York by leaping off the top rope and crashing down upon it with his knee-drop. In reality, he had broken his ankle getting out of bed the morning before the match. The injury and subsequent rehabilitation were worked into the existing Roussimoff/Khan storyline. After a stay at Beth Israel Hospital in Boston, Roussimoff returned with payback on his mind. The two battled on 20 July 1981, at Madison Square Garden in a match that resulted in a double disqualification. Their feud continued as fans filled arenas up and down the east coast to witness their matches. On 14 November 1981 at the Philadelphia Spectrum, he decisively defeated Khan in what was billed as a "Mongolian stretcher match", in which the loser must be taken to the dressing room on a stretcher. The same type of match was also held in Toronto. In early 1982 the two also fought in a series of matches in Japan with Arnold Skaaland in Roussimoff's corner. World Wrestling Federation (1984–1991). Feud with the Heenan Family (1984–1987). In 1982, Vincent J. McMahon sold the World Wide Wrestling Federation to his son, Vince McMahon. As McMahon began to expand his newly acquired promotion to the national level, he required his wrestlers to appear exclusively for him. McMahon signed Roussimoff to these terms in 1984, although he still allowed him to work in Japan for New Japan Pro-Wrestling (NJPW). Roussimoff feuded with Big John Studd over which of the two men was the "true giant" of wrestling, and throughout the early to mid-1980s the two fought all over the world. In 1984, Studd took the feud to a new level when he and partner Ken Patera knocked out Roussimoff during a televised tag-team match and proceeded to cut off his hair. After gaining revenge on Patera, Roussimoff met Studd in a "body slam challenge" at the first WrestleMania, held 31 March 1985, at Madison Square Garden in New York City. Roussimoff slammed Studd to win the match and collect the $15,000 prize, then proceeded to throw cash to the fans before having the bag taken from him by Studd's manager, Bobby "The Brain" Heenan. At WrestleMania 2 on 7 April 1986, Roussimoff continued to display his dominance by winning a twenty-man battle royal which featured top National Football League stars and wrestlers. He eliminated Bret Hart last to win the contest. 
Following a final tour with New Japan Pro-Wrestling in mid-1986, and a win in Austria over CWA World champion Otto Wanz, Roussimoff began appearing exclusively with the World Wrestling Federation. After WrestleMania 2, Roussimoff continued his feud with Studd and King Kong Bundy. Around this time, Roussimoff requested a leave of absence to tend to his health, as the effects of his acromegaly were beginning to take their toll, and to tour Japan. He had also been cast in the film "The Princess Bride". To explain his absence, a storyline was developed in which Heenan—suggesting that Roussimoff was secretly afraid of Studd and Bundy, whom Heenan bragged were unbeatable—challenged Roussimoff and a partner of his choosing to wrestle Studd and Bundy in a televised tag-team match. When Roussimoff failed to show, WWF president Jack Tunney indefinitely suspended him. Later in the summer of 1986, upon Roussimoff's return to the United States, he began wearing a mask and competing as the "Giant Machine" in a stable known as the Machines. Big Machine and Super Machine were the other members; Hulk Hogan (as "Hulk Machine") and Roddy Piper (as "Piper Machine") were also one-time members. The WWF's television announcers sold the Machines (a gimmick copied from the New Japan Pro-Wrestling character "Super Strong Machine", played by Japanese wrestler Junji Hirata) as "a new tag-team from Japan" and claimed not to know the identities of the wrestlers, even though it was obvious to fans that it was Roussimoff competing as the Giant Machine. Heenan, Studd, and Bundy complained to Tunney, who eventually told Heenan that if it could be proven that Roussimoff and the Giant Machine were the same person, Roussimoff would be fired. Roussimoff thwarted Heenan, Studd, and Bundy at every turn. Then, in late 1986, the Giant Machine "disappeared" and Roussimoff was reinstated. Foreshadowing Roussimoff's heel turn, Heenan expressed his approval of the reinstatement but did not explain why. Alliance with Bobby Heenan and Ted DiBiase (1987–1989). Roussimoff agreed to turn heel in early 1987 to be the counter to the biggest "babyface" in professional wrestling at that time, Hulk Hogan. On an edition of "Piper's Pit" in 1987, Hogan was presented a trophy for being the WWF World Heavyweight Champion for three years; Roussimoff came out to congratulate him, shaking Hogan's hand with a strong grip, which surprised the Hulkster. On the following week's "Piper's Pit", Roussimoff was presented a slightly smaller trophy for being "the only undefeated wrestler in wrestling history." Although he had suffered a handful of countout and disqualification losses in WWF, he had never been pinned or forced to submit in a WWF ring. Hogan came out to congratulate him and ended up being the focal point of the interview. Apparently annoyed, Roussimoff walked out in the midst of Hogan's speech. A discussion between Roussimoff and Hogan was scheduled, and on a "Piper's Pit" that aired 7 February 1987, the two met. Hogan was introduced first, followed by Roussimoff, who was led by longtime rival Bobby Heenan. Speaking on behalf of his new protégé, Heenan accused Hogan of being Roussimoff's friend only so he would not have to defend his title against him. Hogan tried to reason with Roussimoff, but his pleas were ignored as he challenged Hogan to a match for the WWF World Heavyweight Championship at WrestleMania III. 
Hogan was still seemingly in disbelief as to what Roussimoff was doing, prompting Heenan to say "You can't believe it? Maybe you'll believe this, Hogan" before Roussimoff ripped off the T-shirt and crucifix from Hogan, with the crucifix scratching Hogan's chest, causing him to bleed. Following Hogan's acceptance of his challenge on a later edition of "Piper's Pit", the two were part of a 20-man over-the-top-rope battle-royal on 14 March edition of "Saturday Night's Main Event X" at the Joe Louis Arena in Detroit. Although the battle royal was won by Hercules, Roussimoff claimed to have gained a psychological advantage over Hogan when he threw the WWF World Heavyweight Champion over the top rope. The match, which was actually taped on 21 February 1987, aired only two weeks before WrestleMania III to make it seem like Hogan had met his match in André the Giant. At WrestleMania III, he was billed at , and the stress of such immense weight on his bones and joints resulted in constant pain. After recent back surgery, he was also wearing a brace underneath his wrestling singlet. In front of a record crowd, Hogan won the match after body-slamming Roussimoff (later dubbed "the bodyslam heard around the world"), followed by Hogan's running leg drop finisher. Years later, Hogan claimed that Roussimoff was so heavy, he felt more like , and that he tore his latissimus dorsi muscle when slamming him. Another myth about the match is that no one, not even WWF owner Vince McMahon, knew until the day of the event whether Roussimoff would lose the match. In reality, he agreed to lose the match sometime before, mostly for health reasons. Contrary to popular belief, it was not the first time that Hogan had successfully body-slammed him in a WWF match. A then-heel Hogan had slammed a then-face Roussimoff following their match at the Showdown at Shea on 9 August 1980, though Roussimoff was somewhat lighter (around ) and more athletic at the time (Hogan also slammed him in a match in Hamburg, Pennsylvania, a month later). This took place in the territorial days of American wrestling three years before WWF began national expansion, so many of those who watched WrestleMania III had never seen the Giant slammed (Roussimoff had also previously allowed Harley Race, El Canek and Stan Hansen, among others, to slam him). By the time of WrestleMania III, the WWF went national, giving more meaning to the Roussimoff–Hogan match that took place then. The feud between Roussimoff and Hogan simmered during the summer of 1987, as Roussimoff's health declined. The feud began heating up again when wrestlers were named the captains of rival teams at the inaugural Survivor Series event. During their approximately one minute of battling each other during the match, Hogan dominated Roussimoff and was on the brink of knocking him from the ring, but was tripped up by his partners, Bundy and One Man Gang, and would be counted out. Roussimoff went on to be the sole survivor of the match, pinning Bam Bam Bigelow before Hogan returned to the ring to attack André and knock him out of the ring. Roussimoff later got revenge when, after Hogan won a match against Bundy on "Saturday Night's Main Event", he snuck up from behind and began choking Hogan to the brink of unconsciousness, not letting go even after an army of seven face-aligned wrestlers ran to the ring to try to pull him away; it took Hacksaw Jim Duggan breaking a piece of wood over his back (which he no-sold) for him to let go, after which Hogan was pulled to safety. 
As was the case with the "SNME" battle royal a year earlier, the series of events helped build interest in a possible one-on-one rematch between Hogan and Roussimoff, and made it seem that Roussimoff was certain to win easily when they did meet. Meanwhile, Roussimoff returned to Germany in December 1987 for another match with Wanz, which he lost by count-out. In the meantime, the "Million Dollar Man" Ted DiBiase failed to persuade Hogan to sell him the WWF World Heavyweight Championship. After failing to defeat Hogan in a subsequent series of matches, DiBiase turned to Roussimoff to win it for him. He and DiBiase had teamed several times in the past, including in Japan and in the WWF in the late 1970s and early 1980s when both were faces, but this was not acknowledged during this new storyline. The earlier attack and DiBiase's insertion into the feud set up the Hogan-Roussimoff rematch on "The Main Event", to air 5 February 1988, on a live broadcast on NBC. Acting as DiBiase's hired gun, Roussimoff won the WWF World Heavyweight Championship from Hogan (his first singles title) in a match where it was later revealed that appointed referee Dave Hebner was "detained backstage", and a replacement (whom Hogan afterwards initially accused of having been paid by DiBiase to get plastic surgery to look like Dave, but who was revealed to have been his evil twin brother, Earl Hebner) made a three count on Hogan while his shoulders were off the mat. After winning, Roussimoff "sold" the title to DiBiase; the transaction was declared invalid by then-WWF president Jack Tunney and the title was declared vacant. At WrestleMania IV, Roussimoff and Hulk Hogan fought to a double disqualification in a WWF title tournament match (the storyline being that Roussimoff was again working on DiBiase's behalf, giving DiBiase a clearer path in the tournament). Roussimoff and Hogan's feud died down after a steel cage match, won by Hogan, held at "WrestleFest" on 31 July 1988 in Milwaukee. At the inaugural SummerSlam pay-per-view held at Madison Square Garden, Roussimoff and DiBiase (billed as The Mega Bucks) faced Hogan and WWF World Heavyweight Champion "Macho Man" Randy Savage (known as The Mega Powers) in the main event, with Jesse "The Body" Ventura as the special guest referee. During the match, the Mega Powers' manager, Miss Elizabeth, distracted the Mega Bucks and Ventura when she climbed up on the ring apron, removed her yellow skirt and walked around in a pair of red panties. This allowed Hogan and Savage time to recover and eventually win the match, with Hogan pinning DiBiase. Because Ventura's character had historically been at odds with Hogan and he was unwilling to count the fall, Savage forced Ventura's hand down for the final three-count. Concurrent with the developing feud with the Mega Powers, Roussimoff was placed in a feud with Jim Duggan, which began after Duggan knocked out Roussimoff with a two-by-four board during a television taping. Despite Duggan's popularity with fans, Roussimoff regularly got the upper hand in the feud. Roussimoff's next major feud was against Jake "The Snake" Roberts. In this storyline, it was said Roussimoff was afraid of snakes, something Roberts exposed on "Saturday Night's Main Event" when he threw his snake, Damien, on the frightened Roussimoff; as a result, he suffered a kayfabe mild heart attack and vowed revenge. 
During the next few weeks, Roberts frequently walked to ringside carrying his snake in its bag during Roussimoff's matches, causing the latter to run from the ring in fright. Throughout their feud (which culminated at WrestleMania V), Roberts constantly used Damien to gain a psychological edge over the much larger and stronger Roussimoff. In 1989, Roussimoff and the returning Big John Studd briefly reprised their feud, beginning at WrestleMania V, when Studd was the referee in the match with Roberts, this time with Studd as a face and Roussimoff as the heel. During the late summer and autumn of 1989, Roussimoff engaged in a brief feud with then-WWF Intercontinental Champion The Ultimate Warrior, consisting almost entirely of house shows (non-televised events) and one televised match on 28 October 1989 at Madison Square Garden. Roussimoff began to wear face paint with a design similar to the Warrior's and began calling himself "The Ultimate Giant" when he appeared on "The Brother Love Show". The younger Warrior, the WWF's rising star, regularly squashed the aging Roussimoff in an attempt to showcase his star quality and promote him as the "next big thing". Colossal Connection (1989–1990). In late 1989, Roussimoff was paired with fellow Heenan Family member Haku to form a new tag team called the Colossal Connection, in part to fill a void left by the departure of Tully Blanchard and Arn Anderson (the Brain Busters, who were also members of Heenan's stable) from the WWF, and also to continue to keep the aging Roussimoff in the main event spotlight. His last singles match was a loss to The Ultimate Warrior in 20 seconds at a house show in Cape Girardeau, Missouri on 11 December 1989. The Colossal Connection immediately targeted WWF Tag Team Champions Demolition (who had recently won the belts from the Brain Busters). At a television taping on 13 December 1989, the Colossal Connection defeated Demolition to win the titles. Roussimoff and Haku successfully defended their title, mostly against Demolition, until WrestleMania VI on 1 April 1990, when Demolition took advantage of a mistimed move by the champions to regain the belts. After the match, a furious Heenan blamed Roussimoff for the title loss and, after shouting at him, slapped him in the face; an angry Roussimoff responded with a slap of his own that sent Heenan staggering from the ring. Roussimoff also caught Haku's kick attempt, sending him reeling from the ring as well, prompting support for Roussimoff and turning him face for the first time since 1987. Due to his ongoing health issues, Roussimoff was not able to wrestle at the time of WrestleMania VI, and Haku actually wrestled the entire match against Demolition without tagging him in. On weekend television shows following WrestleMania VI, Bobby Heenan vowed to spit in Roussimoff's face when he came crawling back to the Heenan Family. Roussimoff wrestled one more time with Haku, teaming up to face Demolition at a house show in Honolulu on 10 April; Roussimoff was knocked out of the ring and the Colossal Connection lost via count-out. After the match, Roussimoff and Haku fought each other, marking the end of the team. His final WWF match of 1990 came at a combined WWF/All Japan/New Japan show on 13 April in Tokyo, Japan, when he teamed with Giant Baba to defeat Demolition in a non-title match; Roussimoff won by pinning Smash. Sporadic appearances (1990–1991). Roussimoff returned in the winter of 1990, but it was not to the World Wrestling Federation. 
Instead, Roussimoff made an interview appearance for Herb Abrams' fledgling Universal Wrestling Federation on 11 October in Reseda, California (the segment aired in 1991), appearing in a segment with Captain Lou Albano and putting over the UWF. The following month, on 30 November at a house show in Miami, Florida, the World Wrestling Federation announced his return as a participant in the 1991 Royal Rumble (to be held in Miami two months later). Roussimoff was also mentioned as a participant on television, but he ultimately backed out due to a leg injury. His on-air return finally took place at the WWF's "Super-Stars & Stripes Forever" USA Network special on 17 March 1991, when he came out to shake the hand of Big Boss Man after an altercation with Mr. Perfect. The following week at WrestleMania VII, he came to the aid of the Boss Man in his match against Mr. Perfect. Roussimoff finally returned to action on 26 April 1991, in a six-man tag-team matchup when he teamed with The Rockers in a winning effort against Mr. Fuji and The Orient Express at a house show in Belfast, Northern Ireland. On 11 May 1991 he participated in a 17-man battle-royal at a house show in Detroit, which was won by Kerry Von Erich. This was André's final WWF match, although he was involved in several subsequent storylines. His last major WWF storyline following WrestleMania VII had the major heel managers (Bobby Heenan, Sensational Sherri, Slick, and Mr. Fuji) trying to recruit Roussimoff one by one, only to be turned down in various humiliating ways (e.g. Heenan had his hand crushed, Sherri received a spanking, Slick got locked in the trunk of the car he was offering to Roussimoff, and Mr. Fuji got a pie in his face). Finally, Jimmy Hart appeared live on "WWF Superstars" to announce that he had successfully signed Roussimoff to tag-team with Earthquake. When asked to confirm this by Gene Okerlund, Roussimoff denied the claims. This led to Earthquake attacking Roussimoff from behind (injuring his knee). Jimmy Hart would later get revenge for the humiliation by secretly signing Tugboat and forming the Natural Disasters. This led to Roussimoff's final major WWF appearance at SummerSlam 1991, where he seconded the Bushwhackers in their match against the Disasters. Roussimoff was on crutches at ringside, and after the Disasters won the match, they set out to attack him, but the Legion of Doom made their way to ringside and got in between them and the Giant, who was preparing to defend himself with one of his crutches. The Disasters left the ringside area as they were outnumbered by the Legion of Doom, the Bushwhackers and Roussimoff, who struck both Earthquake and Typhoon (the former Tugboat) with the crutch as they left. His final WWF appearance came at a house show in Paris, France, on 9 October 1991. He was in Davey Boy Smith's corner as the Bulldog faced Earthquake; Smith hit Earthquake with Roussimoff's crutch, allowing Smith to win. All Japan Pro Wrestling; Universal Wrestling Association (1990–1992). After WrestleMania VI, Roussimoff spent the rest of his in-ring career in All Japan Pro Wrestling (AJPW) and Mexico's Universal Wrestling Association (UWA), where he performed under the name "André el Gigante". He toured with AJPW three times per year from 1990 to 1992, usually teaming with Giant Baba in tag-team matches. Roussimoff made a couple of guest appearances for Herb Abrams' Universal Wrestling Federation in 1991, feuding with Big John Studd, though he never had a match in the promotion. 
In his last U.S. television appearance, Roussimoff was a guest on World Championship Wrestling's (WCW) "Clash of the Champions XX" special, which aired on TBS on 2 September 1992, where he gave a brief interview. During the same event, he appeared alongside Gordon Solie and was later seen talking with him during the gala celebrating the 20th anniversary of wrestling on TBS. He made his final tour of Mexico in 1992, working a selection of six-man tag matches alongside Bam Bam Bigelow and a variety of lucha libre stars, facing, among others, Bad News Allen and future WWF Champions Mick Foley and Yokozuna. Roussimoff made his final tour with AJPW from October to December 1992; he wrestled what became the final match of his career on 4 December 1992, teaming with Giant Baba and Rusher Kimura to defeat Haruka Eigen, Masanobu Fuchi, and Motoshi Okuma. Acting career. Having appeared in a French boxing film in 1967, Roussimoff branched out into acting again in the 1970s and 1980s, making his U.S. acting debut playing a Sasquatch ("Bigfoot") in a two-part episode of the television series "The Six Million Dollar Man", aired in 1976. He appeared in other television shows, including "The Greatest American Hero", "B. J. and the Bear", "The Fall Guy" and 1990's "Zorro". Towards the end of his career, Roussimoff appeared in several films. He had an uncredited appearance in the 1984 film "Conan the Destroyer" as Dagoth, the resurrected horned giant god who is killed by Conan (Arnold Schwarzenegger). That same year, he also made an appearance in "Micki & Maude" (billed as André Rousimmoff). He appeared most notably as Fezzik, his own favorite role, in the 1987 film "The Princess Bride". Roussimoff found it a novel and particularly gratifying experience that no one stared at him on set during production. Both the film and his performance retain a devoted following. In a short interview, Lanny Poffo stated that the movie meant so much to André that he made his wrestling pals watch an advance copy of the VHS with him over and over again, supplying dinner and drinks and sweetly asking each time, "Did you like my performance?". In his last film, he had a cameo role as a circus giant in the comedy "Trading Mom", which was released in 1994, a year after his death. Personal life. Roussimoff was mentioned in the "1974 Guinness Book of World Records" as the then-highest-paid wrestler in history, earning an annual salary of approximately $400,000 at the time. Robin Christensen is Roussimoff's only child. Her mother, Jean Christensen (who died in 2008), became acquainted with Roussimoff through the wrestling business around 1972 or 1973. Christensen had almost no connection with her father and saw him only five times in her life, despite occasional televised and printed news pieces criticizing his absentee fatherhood. While she gave some interviews about the subject in her childhood, Christensen was reportedly reluctant to discuss her father later in life. In 1989, Roussimoff was arrested and charged with assault after he attacked a KCRG-TV cameraman shooting his match with The Ultimate Warrior at Cedar Rapids, Iowa's Five Seasons Center. While acquitted on the assault charge, he was fined $100 for criminal mischief and ordered to pay KCRG $233 for damage to its equipment. William Goldman, the author of the novel and the screenplay of "The Princess Bride", wrote in his nonfiction work "Which Lie Did I Tell?" that Roussimoff was one of the gentlest and most generous people he ever knew. 
Whenever Roussimoff ate with someone in a restaurant, he would pay, but he would also insist on paying when he was a guest. On one occasion, after Roussimoff attended a dinner with Arnold Schwarzenegger and Wilt Chamberlain, Schwarzenegger had quietly moved to the cashier to pay before Roussimoff could, but then found himself being physically lifted, carried from his table and deposited on top of his car by Roussimoff and Chamberlain. Roussimoff owned a ranch in Ellerbe, North Carolina, looked after by two of his close friends. When he was not on the road, he loved spending time at the ranch, where he tended to his cattle, played with his dogs, and entertained friends. While there were custom-made chairs and a few other modifications in his home to accommodate his size, tales that everything in his home was custom-made for a large man are said to be exaggerated. Since Roussimoff could not easily go shopping due to his fame and size, he was known to spend hours watching QVC and made frequent purchases from the shopping channel. Health. Roussimoff has been unofficially crowned "the greatest drunk on Earth" for once consuming 119 beers in six hours. In an appearance on David Letterman's show on 23 January 1984, Roussimoff told Letterman that he had drunk 117 beers. When Letterman asked if he was drunk, Roussimoff said he could not remember because he had passed out. He also said he had quit drinking beer 14 months prior to the appearance. On an episode of WWE's "Legends of Wrestling", Mike Graham said Roussimoff once drank 156 beers in one sitting, which was confirmed by Dusty Rhodes. The Fabulous Moolah wrote in her autobiography that Roussimoff drank 127 beers at the bar of the Abraham Lincoln Hotel in Reading, Pennsylvania, and later passed out in the lobby; the staff could not move him and had to leave him there until he awoke. In a shoot interview, Ken Patera recalled an occasion when Roussimoff was challenged by Dick Murdoch to a beer-drinking contest; after nine or so hours, Roussimoff had drunk 116 beers. A tale recounted by Cary Elwes in his book about the making of "The Princess Bride" has Roussimoff falling on top of somebody while drunk, after which the NYPD sent an undercover officer to follow Roussimoff around whenever he went out drinking in their city to make sure he did not fall on anyone again. Another story says that prior to his famous WrestleMania III match, Roussimoff drank 14 bottles of wine. An urban legend exists surrounding Roussimoff's 1987 surgery, in which his size made it impossible for the anesthesiologist to estimate a dosage via standard methods; consequently, his alcohol tolerance was used as a guideline instead. Roussimoff had suffered severe pericardial effusion and had undergone a pericardiocentesis at Duke University Hospital in the 1980s. Death. Roussimoff died in his sleep of congestive heart failure and an apparent heart attack, likely associated with his untreated acromegaly, at a Paris hotel on the morning of Thursday, 28 January 1993, at the age of 46. He had gone to play cards with some friends on the night of Wednesday 27 January and came back to his hotel room around 1 a.m. CET on 28 January. In the afternoon, Roussimoff was found dead in his room by hotel management and his chauffeur. He was in Paris to attend his father's funeral. While there, he decided to stay longer to be with his mother on her birthday. He spent the day before his death visiting and playing cards with some of his oldest friends in Molien. 
In his will, he specified that his remains should be cremated and "disposed of". Upon his death in Paris, his family in France held a funeral for him, intending to bury him near his father. When they learned of his wish to be cremated, his body was flown to the United States, where he was cremated according to his wishes. His ashes were scattered at his ranch in Ellerbe, North Carolina. In addition, in accordance with his will, he left his estate to his sole beneficiary: his daughter Robin. Other media. Roussimoff made numerous appearances as himself in video games, starting with "WWF WrestleMania". He also appears posthumously in "Virtual Pro Wrestling 64", "WWF No Mercy", "Legends of Wrestling", "Legends of Wrestling II", "WWE SmackDown! vs. Raw", "WWE SmackDown! vs. Raw 2006", "WWE Legends of WrestleMania", "WWE All Stars", "WWE 2K14", "WWE 2K15", "WWE 2K16", "WWE 2K17", "WWE 2K18", "WWE 2K19", "WWE 2K20", "WWE 2K Battlegrounds", "WWE 2K22", "WWE 2K23" and many others. In January 2005, WWE released "André The Giant", a DVD focusing on the life and career of Roussimoff. The DVD is a reissue of the out-of-print "André The Giant" VHS made by Coliseum Video in 1985, with commentary by Michael Cole and Tazz replacing Gorilla Monsoon and Jesse Ventura's commentary on his WrestleMania match with Big John Studd. The video is hosted by Lord Alfred Hayes. Later matches, including his battles against Hulk Hogan while a heel, are not included on this VHS.
2577
Adrastea (moon)
Adrastea, also known as Jupiter XV, is the second by distance from Jupiter, and the smallest, of the planet's four inner moons. It was discovered in photographs taken by "Voyager 2" in 1979, making it the first natural satellite to be discovered from images taken by an interplanetary spacecraft, rather than through a telescope. It was officially named after the mythological Adrasteia, foster mother of the Greek god Zeus—the equivalent of the Roman god Jupiter. Adrastea is one of the few moons in the Solar System known to orbit its planet in less than the length of that planet's day. It orbits at the edge of Jupiter's main ring and is thought to be the main contributor of material to the rings of Jupiter. Despite observations made in the 1990s by the "Galileo" spacecraft, very little is known about the moon's physical characteristics other than its size and the fact that it is tidally locked to Jupiter. Discovery and observations. Adrastea was discovered by David C. Jewitt and G. Edward Danielson in "Voyager 2" probe photographs taken on July 8, 1979, and received the provisional designation S/1979 J 1. Although it appeared only as a dot, it was the first moon to be discovered by an interplanetary spacecraft. Soon after its discovery, two more of Jupiter's inner moons (Thebe and Metis) were observed in images taken a few months earlier by "Voyager 1". The "Galileo" spacecraft was able to determine the moon's shape in 1998, but the images remain poor. In 1983, Adrastea was officially named after the Greek nymph Adrastea, the daughter of Zeus and his lover Ananke. Although the "Juno" orbiter, which arrived at Jupiter in 2016, has a camera called JunoCam, it is almost entirely focused on observations of Jupiter itself. However, if all goes well, it should be able to capture some limited images of the moons Metis and Adrastea. Physical characteristics. Adrastea has an irregular shape and measures 20×16×14 km across, making it the smallest of the four inner moons. Its surface area is estimated at between 840 and 1,600 (roughly 1,200) km². The bulk composition and mass of Adrastea are not known, but assuming that its mean density is like that of Amalthea, around 0.86 g/cm³, its mass can be estimated at about 2×10¹⁵ kg; an ellipsoid of those dimensions has a volume of roughly 2,300 km³, which at that density gives a mass of this order. Amalthea's density implies that it is composed of water ice with a porosity of 10–15%, and Adrastea may be similar. No surface details of Adrastea are known, due to the low resolution of available images. Orbit. Adrastea is the smallest and second-closest member of the inner Jovian satellite family. It orbits Jupiter at a radius of about 129,000 km (1.806 Jupiter radii), at the exterior edge of the planet's main ring. The orbit has very small eccentricity and inclination—around 0.0015 and 0.03°, respectively. Inclination is relative to the equator of Jupiter. Due to tidal locking, Adrastea rotates synchronously with its orbital period, keeping one face always looking toward the planet. Its long axis is aligned towards Jupiter, this being the lowest-energy configuration. The orbit of Adrastea lies inside Jupiter's synchronous orbit radius (as does Metis's), and as a result, tidal forces are slowly causing its orbit to decay so that it will one day impact Jupiter. If its density is similar to Amalthea's, then its orbit would actually lie within the fluid Roche limit. However, since it is not breaking up, it must still lie outside its rigid Roche limit. Adrastea is the second-fastest-moving of Jupiter's moons, with an orbital speed of 31.378 km/s. Relationship with Jupiter's rings. 
Adrastea is the largest contributor to material in Jupiter's rings. The ring material appears to consist primarily of material that is ejected from the surfaces of Jupiter's four small inner satellites by meteorite impacts. Impact ejecta are easily lost from these satellites into space because of the satellites' low density and because their surfaces lie close to the edge of their Hill spheres. Adrastea seems to be the most copious source of this ring material, as evidenced by the densest ring (the main ring) being located at and within Adrastea's orbit. More precisely, the orbit of Adrastea lies near the outer edge of Jupiter's main ring. The exact extent of visible ring material depends on the phase angle of the images: in forward-scattered light Adrastea is firmly outside the main ring, but in back-scattered light (which reveals much bigger particles) there appears to also be a narrow ringlet outside Adrastea's orbit.
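As a rough check on the figures quoted above, the following sketch estimates Adrastea's mass from its triaxial dimensions and the assumed Amalthea-like density, and its orbital speed from the quoted orbital radius. Jupiter's standard gravitational parameter is supplied here as an outside constant (not given in the text), and the results are order-of-magnitude estimates, not measurements.

```python
import math

# Triaxial semi-axes in metres, from the quoted 20 x 16 x 14 km dimensions
a, b, c = 20e3 / 2, 16e3 / 2, 14e3 / 2

# Assumed mean density, taken from Amalthea as the text suggests (0.86 g/cm^3 -> kg/m^3)
density = 0.86 * 1000

# Volume of an ellipsoid: V = (4/3) * pi * a * b * c
volume = 4.0 / 3.0 * math.pi * a * b * c
mass = density * volume
print(f"Estimated mass: {mass:.1e} kg")          # roughly 2e15 kg

# Circular orbital speed at the quoted radius of ~129,000 km,
# using Jupiter's gravitational parameter GM ~ 1.267e17 m^3/s^2 (assumed constant)
GM_JUPITER = 1.267e17
r = 129_000e3
v = math.sqrt(GM_JUPITER / r)
print(f"Circular orbital speed: {v / 1000:.1f} km/s")  # roughly 31 km/s
```

Both outputs agree with the values given in the article (about 2×10¹⁵ kg and about 31 km/s), which is expected since they are derived from the same assumptions.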
2578
Amalthea
Amalthea may refer to:
2580
Ananke (disambiguation)
Ananke is the Greek goddess of fate. Ananke may also refer to:
2581
Apache HTTP Server
The Apache HTTP Server is free and open-source cross-platform web server software, released under the terms of Apache License 2.0. It is developed and maintained by a community of developers under the auspices of the Apache Software Foundation. The vast majority of Apache HTTP Server instances run on a Linux distribution, but current versions also run on Microsoft Windows, OpenVMS, and a wide variety of Unix-like systems. Past versions also ran on NetWare, OS/2 and other operating systems, including ports to mainframes. Originally based on the NCSA HTTPd server, development of Apache began in early 1995 after work on the NCSA code stalled. Apache played a key role in the initial growth of the World Wide Web, quickly overtaking NCSA HTTPd as the dominant HTTP server. In 2009, it became the first web server software to serve more than 100 million websites. Netcraft estimated that Apache served 23.04% of the million busiest websites, while Nginx served 22.01%; Cloudflare at 19.53% and Microsoft Internet Information Services at 5.78% rounded out the top four. For some of Netcraft's other statistics, Nginx is ahead of Apache. According to W3Techs' review of all web sites, in June 2022 Apache was ranked second at 31.4% and Nginx first at 33.6%, with Cloudflare Server third at 21.6%. Name. According to The Apache Software Foundation, its name was chosen "from respect for the various Native American nations collectively referred to as Apache, well-known for their superior skills in warfare strategy and their inexhaustible endurance". This was in a context in which the open internet, based on the free exchange of open-source code, seemed in danger of being conquered by the proprietary software vendor Microsoft; Apache co-creator Brian Behlendorf, who originated the name, saw his effort as somewhat parallel to that of Geronimo, chief of the last of the free Apache peoples. But the foundation conceded that the name "also makes a cute pun on 'a patchy web server'—a server made from a series of patches". There are other sources for the "patchy" software pun theory, including the project's official documentation in 1995, which stated: "Apache is a cute name which stuck. It was based on some existing code and a series of software patches, a pun on 'A PAtCHy' server." But in an April 2000 interview, Behlendorf asserted that the origins of the name were not a pun. In January 2023, the US-based non-profit Natives in Tech accused the Apache Software Foundation of cultural appropriation and urged it to change the foundation's name, and consequently also the names of the software projects it hosts. When Apache is running under Unix, its process name is httpd, which is short for "HTTP daemon". Feature overview. Apache supports a variety of features, many implemented as compiled modules which extend the core functionality. These can range from authentication schemes to support for server-side programming languages such as Perl, Python, Tcl and PHP. Popular authentication modules include mod_access, mod_auth, mod_digest, and mod_auth_digest, the successor to mod_digest. A sample of other features includes Secure Sockets Layer and Transport Layer Security support (mod_ssl), a proxy module (mod_proxy), a URL rewriting module (mod_rewrite), custom log files (mod_log_config), and filtering support (mod_include and mod_ext_filter). Popular compression methods on Apache include the external extension module mod_gzip, implemented to help reduce the size (weight) of web pages served over HTTP.
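To illustrate why an output-compression module such as mod_gzip reduces the "weight" of served pages, the snippet below (a standalone illustration in Python, not Apache code) gzip-compresses a repetitive HTML body and compares the sizes, much as a server does before sending a response marked with a Content-Encoding: gzip header. The body text is invented for the example.

```python
import gzip

# A deliberately repetitive HTML body; markup like this compresses very well
body = ("<html><body>" + "<p>Hello, world!</p>" * 500 + "</body></html>").encode("utf-8")

compressed = gzip.compress(body)

print(f"original:   {len(body)} bytes")
print(f"compressed: {len(compressed)} bytes")
# A client that advertised "Accept-Encoding: gzip" would receive the compressed
# form along with a "Content-Encoding: gzip" response header and decompress it locally.
```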
ModSecurity is an open source intrusion detection and prevention engine for Web applications. Apache logs can be analyzed through a Web browser using free scripts, such as AWStats/W3Perl or Visitors. Virtual hosting allows one Apache installation to serve many different websites. For example, one computer with one Apache installation could simultaneously serve example.com, example.org, example.net, and so on. Apache features configurable error messages, DBMS-based authentication databases and content negotiation, and supports several graphical user interfaces (GUIs). It supports password authentication and digital certificate authentication. Because the source code is freely available, anyone can adapt the server for specific needs, and there is a large public library of Apache add-ons. Performance. Instead of implementing a single architecture, Apache provides a variety of MultiProcessing Modules (MPMs), which allow it to run in a process-based mode, a hybrid (process and thread) mode, or an event-hybrid mode, in order to better match the demands of each particular infrastructure. Choice of MPM and configuration is therefore important. Where compromises in performance must be made, Apache is designed to reduce latency and increase throughput relative to simply handling more requests, thus ensuring consistent and reliable processing of requests within reasonable time-frames. For delivering static pages, the Apache 2.2 series was considered significantly slower than nginx and varnish. To address this issue, the Apache developers created the Event MPM, which mixes the use of several processes and several threads per process in an asynchronous event-based loop. This architecture as implemented in the Apache 2.4 series performs at least as well as event-based web servers, according to Jim Jagielski and other independent sources. However, some independent but significantly outdated benchmarks show that it is still half as fast as nginx. Licensing. The Apache HTTP Server codebase was relicensed to the Apache 2.0 License (from the previous 1.1 license) in January 2004, and Apache HTTP Server 1.3.31 and 2.0.49 were the first releases using the new license. The OpenBSD project did not like the change and continued the use of pre-2.0 Apache versions, effectively forking Apache 1.3.x for its purposes. It initially replaced Apache with Nginx, and soon after made its own replacement, OpenBSD httpd, based on the relayd project. Versions. Version 1.1: The Apache License 1.1 was approved by the ASF in 2000. The primary change from the 1.0 license is in the 'advertising clause' (section 3 of the 1.0 license); derived products are no longer required to include attribution in their advertising materials, only in their documentation. Version 2.0: The ASF adopted the Apache License 2.0 in January 2004. The stated goals of the license included making the license easier for non-ASF projects to use, improving compatibility with GPL-based software, allowing the license to be included by reference instead of listed in every file, clarifying the license on contributions, and requiring a patent license on contributions that necessarily infringe a contributor's own patents. Development. The Apache HTTP Server Project is a collaborative software development effort aimed at creating a robust, commercial-grade, feature-rich and freely available source code implementation of an HTTP (Web) server.
The project is jointly managed by a group of volunteers located around the world, using the Internet and the Web to communicate, plan, and develop the server and its related documentation. This project is part of the Apache Software Foundation. In addition, hundreds of users have contributed ideas, code, and documentation to the project. Apache 2.4 dropped support for BeOS, TPF, A/UX, NeXT, and Tandem platforms. Security. Apache, like other server software, can be hacked and exploited. The main Apache attack tool is Slowloris, which exploits a bug in Apache software. It creates many sockets and keeps each of them alive and busy by sending several bytes (known as "keep-alive headers") to let the server know that the computer is still connected and not experiencing network problems. The Apache developers have addressed Slowloris with several modules to limit the damage caused; the Apache modules mod_limitipconn, mod_qos, mod_evasive, mod_security, mod_noloris, and mod_antiloris have all been suggested as means of reducing the likelihood of a successful Slowloris attack. Since Apache 2.2.15, Apache ships the module mod_reqtimeout as the official solution supported by the developers.
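The idea behind a mitigation such as mod_reqtimeout can be sketched outside Apache: the server simply refuses to wait indefinitely for a client to finish sending its request headers, which is exactly the patience a slow, drip-feeding client depends on. The following minimal, hypothetical Python sketch (not Apache's implementation) bounds the total time spent reading one request head and drops the connection when the deadline passes.

```python
import socket
import time

HEADER_DEADLINE = 10.0   # total seconds allowed to receive the request head

def serve_once(port: int = 8080) -> None:
    """Accept one connection and bound the total time spent reading its headers."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            start = time.monotonic()
            data = b""
            # Keep reading until the blank line that ends the header block,
            # but never wait past the overall deadline.
            while b"\r\n\r\n" not in data:
                remaining = HEADER_DEADLINE - (time.monotonic() - start)
                if remaining <= 0:
                    # Slow (possibly Slowloris-style) client: drop it rather
                    # than letting it occupy a worker indefinitely.
                    conn.sendall(b"HTTP/1.1 408 Request Timeout\r\n\r\n")
                    return
                conn.settimeout(remaining)
                try:
                    chunk = conn.recv(4096)
                except socket.timeout:
                    conn.sendall(b"HTTP/1.1 408 Request Timeout\r\n\r\n")
                    return
                if not chunk:
                    return               # client closed the connection
                data += chunk
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")

if __name__ == "__main__":
    serve_once()
```

The sketch handles a single connection for clarity; a production server applies the same deadline per connection across many workers, which is what the Apache module configures.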
2582
Alph
Alph may refer to:
2583
Arbroath Abbey
Arbroath Abbey, in the Scottish town of Arbroath, was founded in 1178 by King William the Lion for a group of Tironensian Benedictine monks from Kelso Abbey. It was consecrated in 1197 with a dedication to the deceased Saint Thomas Becket, whom the king had met at the English court. It was William's only personal foundation — he was buried before the high altar of the church in 1214. The last Abbot was Cardinal David Beaton, who in 1522 succeeded his uncle James to become Archbishop of St Andrews. The Abbey is cared for by Historic Environment Scotland and is open to the public throughout the year (entrance charge). The distinctive red sandstone ruins stand at the top of the High Street in Arbroath. History. King William gave the Abbey independence from its founding abbey, Kelso Abbey, and endowed it generously, including income from 24 parishes, land in every royal burgh and more. The Abbey's monks were allowed to run a market and build a harbour. King John of England gave the Abbey permission to buy and sell goods anywhere in England (except London) toll-free. The Abbey, which was the richest in Scotland, is most famous for its association with the 1320 Declaration of Scottish Independence, believed to have been drafted by Abbot Bernard, who was the Chancellor of Scotland under King Robert I. The Abbey fell into ruin after the Reformation. From 1590 onward, its stones were raided for buildings in the town of Arbroath. This continued until 1815, when steps were taken to preserve the remaining ruins. On Christmas Day 1950, the Stone of Destiny went missing from Westminster Abbey. On April 11, 1951, the stone was found lying on the site of the Abbey's altar. Since 1947, a major historical re-enactment commemorating the Declaration's signing has been held within the roofless remains of the Abbey church. The celebration is run by the local Arbroath Abbey Pageant Society, and tells the story of the events which led up to the signing. This is not an annual event. However, a special event to mark the signing is held every year on the 6th of April and involves a street procession and short piece of street theatre. In 2005, the Arbroath Abbey campaign was launched. The campaign seeks to gain World Heritage Status for the iconic Angus landmark that was the birthplace of one of Scotland's most significant documents, the Declaration of Arbroath. Campaigners believe that the Abbey's historical pronouncement makes it a prime candidate to achieve World Heritage Status. MSP Alex Johnstone wrote "Clearly, the Declaration of Arbroath is a literary work of outstanding universal significance by any stretch of the imagination". In 2008, the campaign group chairman, Councillor Jim Millar, launched a public petition to reinforce the bid, explaining "We're simply asking people to, local people especially, to sign up to the campaign to have the Declaration of Arbroath and Arbroath Abbey recognised by the United Nations. Essentially we need local people to sign up to this campaign simply because the United Nations demand it." Architectural description. The Abbey was built over some sixty years using local red sandstone, but gives the impression of a single coherent, mainly 'Early English' architectural design, though the round-arched processional doorway in the western front looks back to late Norman or transitional work. The triforium (open arcade) above the door is unique in Scottish medieval architecture. It is flanked by twin towers decorated with blind arcading. The cruciform church measured long by wide.
What remains of it today are the sacristy, added by Abbot Paniter in the 15th century, the southern transept, which features Scotland's largest lancet windows, part of the choir and presbytery, the southern half of the nave, parts of the western towers and the western doorway. The church originally had a central tower and (probably) a spire. These would once have been visible from many miles over the surrounding countryside, and no doubt once acted as a sea mark for ships. The soft sandstone of the walls was originally protected by plaster internally and render externally. These coatings are long gone and much of the architectural detail is sadly eroded, though detached fragments found in the ruins during consolidation give an impression of the original refined, rather austere, architectural effect. The distinctive round window high in the south transept was originally lit up at night as a beacon for mariners. It is known locally as the 'Round O', and from this tradition inhabitants of Arbroath are colloquially known as 'Reid Lichties' (Scots reid = red). Little remains of the claustral buildings of the Abbey except for the impressive gatehouse, which stretches between the south-west corner of the church and a defensive tower on the High Street, and the still complete Abbot's House, a building of the 13th, 15th and 16th centuries, which is the best preserved of its type in Scotland. In the summer of 2001, a new visitors' centre was opened to the public beside the Abbey's west front. This red sandstone-clad building, with its distinctive 'wave-shaped' organic roof, planted with sedum, houses displays on the history of the Abbey and some of the best surviving stonework and other relics. The upper storey features a scale model of the Abbey complex, a computer-generated 'fly-through' reconstruction of the church as it was when complete, and a viewing gallery with excellent views of the ruins. The centre won the 2002 Angus Design Award. An archaeological investigation of the site of the visitors' centre before building started revealed the foundations of the medieval precinct wall, with a gateway, and stonework discarded during manufacture, showing that the area was the site of the masons' yard while the Abbey was being built.
2593
Accounting
Accounting, also known as accountancy, is the measurement, processing, and communication of financial and non-financial information about economic entities such as businesses and corporations. Accounting, which has been called the "language of business", measures the results of an organization's economic activities and conveys this information to a variety of stakeholders, including investors, creditors, management, and regulators. Practitioners of accounting are known as accountants. The terms "accounting" and "financial reporting" are often used as synonyms. Accounting can be divided into several fields including financial accounting, management accounting, tax accounting and cost accounting. Financial accounting focuses on the reporting of an organization's financial information, including the preparation of financial statements, to the external users of the information, such as investors, regulators and suppliers. Management accounting focuses on the measurement, analysis and reporting of information for internal use by management. The recording of financial transactions, so that summaries of the financials may be presented in financial reports, is known as bookkeeping, of which double-entry bookkeeping is the most common system. Accounting information systems are designed to support accounting functions and related activities. Accounting has existed in various forms and levels of sophistication throughout human history. The double-entry accounting system in use today was developed in medieval Europe, particularly in Venice, and is usually attributed to the Italian mathematician and Franciscan friar Luca Pacioli. Today, accounting is facilitated by organizations such as standard-setters, accounting firms and professional bodies. Financial statements are usually audited by accounting firms, and are prepared in accordance with generally accepted accounting principles (GAAP). GAAP is set by various standard-setting organizations such as the Financial Accounting Standards Board (FASB) in the United States and the Financial Reporting Council in the United Kingdom. As of 2012, "all major economies" have plans to converge towards or adopt the International Financial Reporting Standards (IFRS). History. Accounting is thousands of years old and can be traced to ancient civilizations. The early development of accounting dates back to ancient Mesopotamia, and is closely related to developments in writing, counting and money; there is also evidence of early forms of bookkeeping in ancient Iran, and early auditing systems by the ancient Egyptians and Babylonians. By the time of Emperor Augustus, the Roman government had access to detailed financial information. Double-entry bookkeeping was pioneered in the Jewish community of the early-medieval Middle East and was further refined in medieval Europe. With the development of joint-stock companies, accounting split into financial accounting and management accounting. The first published work on a double-entry bookkeeping system was the "Summa de arithmetica", published in Italy in 1494 by Luca Pacioli (the "Father of Accounting"). Accounting began to transition into an organized profession in the nineteenth century, with local professional bodies in England merging to form the Institute of Chartered Accountants in England and Wales in 1880. Etymology. Both the words accounting and accountancy were in use in Great Britain by the mid-1800s, and are derived from the words "accompting" and "accountantship" used in the 18th century.
In Middle English (used roughly between the 12th and the late 15th century) the verb "to account" had the form "accounten", which was derived from the Old French word "aconter", which is in turn related to the Vulgar Latin word "computare", meaning "to reckon". The base of "computare" is "putare", which "variously meant to prune, to purify, to correct an account, hence, to count or calculate, as well as to think". The word "accountant" is derived from the French word "compter", which is in turn derived from the Italian and Latin word "computare". The word was formerly written in English as "accomptant", but in process of time the word, which was always pronounced by dropping the "p", became gradually changed both in pronunciation and in orthography to its present form. Terminology. Accounting has variously been defined as the keeping or preparation of the financial records of transactions of the firm, the analysis, verification and reporting of such records and "the principles and procedures of accounting"; it also refers to the job of being an accountant. Accountancy refers to the occupation or profession of an accountant, particularly in British English. Topics. Accounting has several subfields or subject areas, including financial accounting, management accounting, auditing, taxation and accounting information systems. Financial accounting. Financial accounting focuses on the reporting of an organization's financial information to external users of the information, such as investors, potential investors and creditors. It calculates and records business transactions and prepares financial statements for the external users in accordance with generally accepted accounting principles (GAAP). GAAP, in turn, arises from the wide agreement between accounting theory and practice, and changes over time to meet the needs of decision-makers. Financial accounting produces past-oriented reports—for example financial statements are often published six to ten months after the end of the accounting period—on an annual or quarterly basis, generally about the organization as a whole. Management accounting. Management accounting focuses on the measurement, analysis and reporting of information that can help managers in making decisions to fulfill the goals of an organization. In management accounting, internal measures and reports are based on cost-benefit analysis, and are not required to follow generally accepted accounting principles (GAAP). In 2014, CIMA created the Global Management Accounting Principles (GMAPs). The result of research from across 20 countries in five continents, the principles aim to guide best practice in the discipline. Management accounting produces past-oriented reports with time spans that vary widely, but it also encompasses future-oriented reports such as budgets. Management accounting reports often include financial and non-financial information, and may, for example, focus on specific products and departments. Auditing. Auditing is the verification of assertions made by others regarding a payoff, and in the context of accounting it is the "unbiased examination and evaluation of the financial statements of an organization". Audit is a professional service that is systematic and conventional. An audit of financial statements aims to express or disclaim an independent opinion on the financial statements.
The auditor expresses an independent opinion on the fairness with which the financial statements present the financial position, results of operations, and cash flows of an entity, in accordance with the generally accepted accounting principles (GAAP) and "in all material respects". An auditor is also required to identify circumstances in which the generally accepted accounting principles (GAAP) have not been consistently observed. Information systems. An accounting information system is a part of an organization's information system used for processing accounting data. Many corporations use artificial intelligence-based information systems. The banking and finance industry uses AI in fraud detection. The retail industry uses AI for customer services. AI is also used in the cybersecurity industry. Such systems involve computer hardware and software that use statistics and modeling. Many accounting practices have been simplified with the help of computer-based accounting software. An enterprise resource planning (ERP) system is commonly used for a large organisation and it provides a comprehensive, centralized, integrated source of information that companies can use to manage all major business processes, from purchasing to manufacturing to human resources. These systems can be cloud-based and available on demand via application or browser, or available as software installed on specific computers or local servers, often referred to as on-premise. Tax accounting. Tax accounting in the United States concentrates on the preparation, analysis and presentation of tax payments and tax returns. The U.S. tax system requires the use of specialised accounting principles for tax purposes which can differ from the generally accepted accounting principles (GAAP) for financial reporting. U.S. tax law covers four basic forms of business ownership: sole proprietorship, partnership, corporation, and limited liability company. Corporate and personal income are taxed at different rates, both varying according to income levels and including varying marginal rates (taxed on each additional dollar of income) and average rates (set as a percentage of overall income). Forensic accounting. Forensic accounting is a specialty practice area of accounting that describes engagements that result from actual or anticipated disputes or litigation. "Forensic" means "suitable for use in a court of law", and it is to that standard and potential outcome that forensic accountants generally have to work. Political campaign accounting. Political campaign accounting deals with the development and implementation of financial systems and the accounting of financial transactions in compliance with laws governing political campaign operations. This branch of accounting was first formally introduced in the March 1976 issue of "The Journal of Accountancy". Organizations. Professional bodies. Professional accounting bodies include the American Institute of Certified Public Accountants (AICPA) and the other 179 members of the International Federation of Accountants (IFAC), including the Institute of Chartered Accountants of Scotland (ICAS), Institute of Chartered Accountants of Pakistan (ICAP), CPA Australia, Institute of Chartered Accountants of India, Association of Chartered Certified Accountants (ACCA) and Institute of Chartered Accountants in England and Wales (ICAEW).
Some countries have a single professional accounting body and, in some other countries, professional bodies for subfields of the accounting profession also exist, for example the Chartered Institute of Management Accountants (CIMA) in the UK and the Institute of Management Accountants in the United States. Many of these professional bodies offer education and training including qualification and administration for various accounting designations, such as certified public accountant (AICPA) and chartered accountant. Firms. Depending on its size, a company may be legally required to have its financial statements audited by a qualified auditor, and audits are usually carried out by accounting firms. Accounting firms grew in the United States and Europe in the late nineteenth and early twentieth century, and through several mergers there were large international accounting firms by the mid-twentieth century. Further large mergers in the late twentieth century led to the dominance of the auditing market by the "Big Five" accounting firms: Arthur Andersen, Deloitte, Ernst & Young, KPMG and PricewaterhouseCoopers. The demise of Arthur Andersen following the Enron scandal reduced the Big Five to the Big Four. Standard-setters. Generally accepted accounting principles (GAAP) are accounting standards issued by national regulatory bodies. In addition, the International Accounting Standards Board (IASB) issues the International Financial Reporting Standards (IFRS) implemented by 147 countries. Standards for international audit and assurance, ethics, education, and public sector accounting are all set by independent standard-setting boards supported by IFAC. The International Auditing and Assurance Standards Board sets international standards for auditing, assurance, and quality control; the International Ethics Standards Board for Accountants (IESBA) sets the internationally appropriate principles-based "Code of Ethics for Professional Accountants"; the International Accounting Education Standards Board (IAESB) sets professional accounting education standards; and the International Public Sector Accounting Standards Board (IPSASB) sets accrual-based international public sector accounting standards. Organizations in individual countries may issue accounting standards unique to the countries. For example, in Australia, the Australian Accounting Standards Board manages the issuance of the accounting standards in line with IFRS. In the United States, the Financial Accounting Standards Board (FASB) issues the Statements of Financial Accounting Standards, which form the basis of US GAAP, and in the United Kingdom the Financial Reporting Council (FRC) sets accounting standards. However, as of 2012 "all major economies" have plans to converge towards or adopt the IFRS. Education, training and qualifications. Degrees. At least a bachelor's degree in accounting or a related field is required for most accountant and auditor job positions, and some employers prefer applicants with a master's degree. A degree in accounting may also be required for, or may be used to fulfill the requirements for, membership to professional accounting bodies. For example, the education during an accounting degree can be used to fulfill the American Institute of CPA's (AICPA) 150 semester hour requirement, and associate membership with the Certified Public Accountants Association of the UK is available after gaining a degree in finance or accounting.
A doctorate is required in order to pursue a career in accounting academia, for example, to work as a university professor in accounting. The Doctor of Philosophy (PhD) and the Doctor of Business Administration (DBA) are the most popular degrees. The PhD is the most common degree for those wishing to pursue a career in academia, while DBA programs generally focus on equipping business executives for business or public careers requiring research skills and qualifications. Professional qualifications. Professional accounting qualifications include the Chartered Accountant designations and other qualifications including certificates and diplomas. In Scotland, chartered accountants of ICAS undergo Continuous Professional Development and abide by the ICAS code of ethics. In England and Wales, chartered accountants of the ICAEW undergo annual training, and are bound by the ICAEW's code of ethics and subject to its disciplinary procedures. In the United States, the requirements for joining the AICPA as a Certified Public Accountant are set by the Board of Accountancy of each state, and members agree to abide by the AICPA's Code of Professional Conduct and Bylaws. The ACCA is the largest global accountancy body with over 320,000 members, and the organisation provides an 'IFRS stream' and a 'UK stream'. Students must pass a total of 14 exams, which are arranged across three levels. Research. Accounting research is research in the effects of economic events on the process of accounting, the effects of reported information on economic events, and the roles of accounting in organizations and society. It encompasses a broad range of research areas including financial accounting, management accounting, auditing and taxation. Accounting research is carried out both by academic researchers and practicing accountants. Methodologies in academic accounting research include archival research, which examines "objective data collected from repositories"; experimental research, which examines data "the researcher gathered by administering treatments to subjects"; analytical research, which is "based on the act of formally modeling theories or substantiating ideas in mathematical terms"; interpretive research, which emphasizes the role of language, interpretation and understanding in accounting practice, "highlighting the symbolic structures and taken-for-granted themes which pattern the world in distinct ways"; critical research, which emphasizes the role of power and conflict in accounting practice; case studies; computer simulation; and field research. Empirical studies document that leading accounting journals publish in total fewer research articles than comparable journals in economics and other business disciplines, and consequently, accounting scholars are relatively less successful in academic publishing than their business school peers. Due to different publication rates between accounting and other business disciplines, a recent study based on academic author rankings concludes that the competitive value of a single publication in a top-ranked journal is highest in accounting and lowest in marketing. Scandals. The year 2001 witnessed a series of financial information frauds involving Enron, auditing firm Arthur Andersen, the telecommunications company WorldCom, Qwest and Sunbeam, among other well-known corporations. These problems highlighted the need to review the effectiveness of accounting standards, auditing regulations and corporate governance principles. 
In some cases, management manipulated the figures shown in financial reports to indicate a better economic performance. In others, tax and regulatory incentives encouraged over-leveraging of companies and decisions to bear extraordinary and unjustified risk. The Enron scandal deeply influenced the development of new regulations to improve the reliability of financial reporting, and increased public awareness about the importance of having accounting standards that show the financial reality of companies and the objectivity and independence of auditing firms. In addition to being the largest bankruptcy reorganization in American history, the Enron scandal undoubtedly is the biggest audit failure causing the dissolution of Arthur Andersen, which at the time was one of the five largest accounting firms in the world. After a series of revelations involving irregular accounting procedures conducted throughout the 1990s, Enron filed for Chapter 11 bankruptcy protection in December 2001. One consequence of these events was the passage of the Sarbanes–Oxley Act in the United States in 2002, as a result of the first admissions of fraudulent behavior made by Enron. The act significantly raises criminal penalties for securities fraud, for destroying, altering or fabricating records in federal investigations or any scheme or attempt to defraud shareholders. Fraud and error. Accounting fraud is an intentional misstatement or omission in the accounting records by management or employees which involves the use of deception. It is a criminal act and a breach of civil tort. It may involve collusion with third parties. An accounting error is an unintentional misstatement or omission in the accounting records, for example misinterpretation of facts, mistakes in processing data, or oversights leading to incorrect estimates. Acts leading to accounting errors are not criminal but may breach civil law, for example, the tort of negligence. The primary responsibility for the prevention and detection of fraud and errors rests with the entity's management.
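One reason errors of the kind described above can be caught is the double-entry system introduced earlier in this article: every transaction is recorded as debits and credits that must sum to the same amount, so an unbalanced entry signals a mistake before it reaches the financial statements. The following minimal sketch illustrates that invariant only; the account names and amounts are invented for the example, and real ledgers track accounts, periods, and far more detail.

```python
from dataclasses import dataclass, field

@dataclass
class JournalEntry:
    """A single double-entry transaction: total debits must equal total credits."""
    description: str
    debits: dict[str, float] = field(default_factory=dict)    # account -> amount
    credits: dict[str, float] = field(default_factory=dict)   # account -> amount

    def is_balanced(self) -> bool:
        # Round to cents so floating-point noise does not cause false alarms
        return round(sum(self.debits.values()), 2) == round(sum(self.credits.values()), 2)

# Example: buying 1,000 of equipment with cash (asset up, another asset down)
entry = JournalEntry(
    description="Purchase of office equipment",
    debits={"Equipment": 1000.00},
    credits={"Cash": 1000.00},
)
assert entry.is_balanced()

# A transposed or mistyped figure breaks the invariant and is flagged immediately
bad = JournalEntry("Mistyped purchase", debits={"Equipment": 1000.00}, credits={"Cash": 100.00})
print(bad.is_balanced())   # False -> the books will not balance, exposing the error
```

Note that this only guards against certain unintentional errors; deliberate fraud, as the passage above describes, typically produces entries that balance arithmetically while misrepresenting the underlying economics.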