id
stringlengths
2
8
url
stringlengths
31
210
title
stringlengths
1
130
text
stringlengths
63
398k
2237
https://en.wikipedia.org/wiki/Steel-string%20acoustic%20guitar
Steel-string acoustic guitar
The steel-string acoustic guitar is a modern form of guitar that descends from the gut-strung Romantic guitar, but is strung with steel strings for a brighter, louder sound. Like the modern classical guitar, it is often referred to simply as an acoustic guitar, or sometimes as a folk guitar. The most common type is often called a flat top guitar, to distinguish it from the more specialized archtop guitar and other variations. The standard tuning for an acoustic guitar is E-A-D-G-B-E (low to high), although many players, particularly fingerpickers, use alternate tunings (scordatura), such as open G (D-G-D-G-B-D), open D (D-A-D-F-A-D), drop D (D-A-D-G-B-E), or D-A-D-G-A-D (particularly in Irish traditional music). Construction Steel-string guitars vary in construction and materials. Different woods and approach to bracing affect the instrument's timbre or tone. While there is little scientific evidence, many players and luthiers believe a well-made guitar's tone improves over time. They theorize that a decrease in the content of hemicellulose, crystallization of cellulose, and changes to lignin over time all result in its wood gaining better resonating properties. Types Steel-string acoustic guitars are commonly constructed in several body types, varying in size, depth, and proportion. In general, the guitar's soundbox can be thought of as composed of two mating chambers: the upper bouts (a bout being the rounded corner of an instrument body) on the neck end of the body, and lower bouts (on the bridge end). These meet at the waist, or the narrowest part of the body face near the soundhole. The proportion and overall size of these two parts helps determine the overall tonal balance and "native sound" of a particular body style – the larger the body, the louder the volume. The parlor, 00, double-O, or grand concert body type is the major body style most directly derived from the classical guitar. It has the thinnest soundbox and the smallest overall size, making it very comfortable to play but lacking in volume projection relative to the larger types. Its smaller size makes it suitable for younger or smaller-framed players. It is well-suited to smaller rooms. Martin's 00-xxx series and Taylor's x12 series are common examples. The grand auditorium guitar, sometimes called the 000 or the triple-O is very similar in design to the grand concert, but slightly wider and deeper. Many 000-style guitars also have a convex back to increase the physical volume of the soundbox without making it deeper at the edges, which would affect comfort and playability. The result is a very balanced tone, comparable to the 00 but with greater volume and dynamic range and slightly more low-end response, making this body style very popular. Eric Clapton's signature Martin, for example, is of this style. Martin's 000-xxx series and Taylor's x14 series are well-known examples of the grand auditorium style. The dreadnought is a large-bodied guitar which incorporates a deeper soundbox, but a smaller and less-pronounced upper bout than most styles. Its size and power gave rise to its name, taken from the most formidable class of warship at the time of its creation in the early 20th century. The style was designed by Martin Guitars to produce a deeper sound than "classic"-style guitars, with very resonant bass. Its body's combination of compact profile with a deep sound has since been copied by virtually every major steel-string luthier, making it the most popular body type. Martin's "D" series guitars, such as the highly prized D-28, are classic examples of the dreadnought. The jumbo body type is bigger again than a grand auditorium but similarly proportioned, and is generally designed to provide a deep tone similar to a dreadnought's. It was designed by Gibson to compete with the dreadnought, but with maximum resonant space for greater volume and sustain. These come at the expense of being oversized, with a very deep sounding box, and thus somewhat more difficult to play. The foremost example of the style is the Gibson J-200, but like the dreadnought, most guitar manufacturers have at least one jumbo model. Any of these body type can incorporate a cutaway, where a section of the upper bout below the neck is scalloped out. This allows for easier access to the frets located atop the soundbox, at the expense of reduced soundbox volume and altered bracing, which can affect the resonant qualities and resulting tone of the instrument. All of these relatively traditional looking and constructed instruments are commonly referred to as flattop guitars. All are commonly used in popular music genres, including rock, blues, country, and folk. Other styles of guitar which enjoy moderate popularity, generally in more specific genres, include: The archtop, which incorporates an arched, violin-like top either carved out of solid wood or heat-pressed using laminations. It usually has violin style f-holes rather than a single round sound hole. It is most commonly used by swing and jazz players and often incorporates an electric pickup. The Selmer-Maccaferri guitar is usually played by those who follow the style of Django Reinhardt. It is an unusual-looking instrument, distinguished by a fairly large body with squarish bouts, and either a D-shaped or longitudinal oval soundhole. The strings are gathered at the tail like an archtop guitar, but the top is flatter. It also has a wide fingerboard and slotted head like a nylon-string guitar. The loud volume and penetrating tone make it suitable for single-note soloing, and it is frequently employed as a lead instrument in gypsy swing. The resonator guitar, also called the Dobro after its most prominent manufacturer, amplifies its sound through one or more metal cone-shaped resonators. It was designed to overcome the problem of conventional acoustic guitars being overwhelmed by horns and percussion instruments in dance orchestras. It became prized for its distinctive sound, however, and gained a place in several musical styles (most notably blues and bluegrass), and retains a niche well after the proliferation of electric amplification. The 12-string guitar replaces each string with a course of two strings. The lower pairs are tuned an octave apart. Its unique sound was made famous by artists such as Lead Belly, Pete Seeger and Leo Kottke. Tonewoods Traditionally, steel-string guitars have been made of a combination of various tonewoods, or woods considered to have pleasing resonant qualities when used in instrument-making. The term is ill-defined and the wood species that are considered tonewoods have evolved throughout history. Foremost for making steel-string guitar tops are Sitka spruce, the most common, and Alpine and Adirondack spruce. The back and sides of a particular guitar are typically made of the same wood; Brazilian rosewood, East Indian rosewood, and Honduras mahogany are traditional choices, however, maple has been prized for the figuring that can be seen when it is cut in a certain way (such as flame and quilt patterns). A common non-traditional wood gaining popularity is sapele, which is tonally similar to mahogany but slightly lighter in color and possessing a deep grain structure that is visually appealing. Due to decreasing availability and rising prices of premium-quality traditional tonewoods, many manufacturers have begun experimenting with alternative species of woods or more commonly available variations on the standard species. For example, some makers have begun producing models with red cedar or mahogany tops, or with spruce variants other than Sitka. Cedar is also common in the back and sides, as is basswood. Entry-level models, especially those made in East Asia, often use nato wood, which is again tonally similar to mahogany but is cheap to acquire. Some have also begun using non-wood materials, such as plastic or graphite. Carbon-fiber and phenolic composite materials have become desirable for building necks, and some high-end luthiers produce all-carbon-fiber guitars. Assembly The steel-string acoustic guitar evolved from the gut-string Romantic guitar, and because steel strings have higher tension, heavier construction is required overall. One innovation is a metal bar called a truss rod, which is incorporated into the neck to strengthen it and provide adjustable counter-tension to the stress of the strings. Typically, a steel-string acoustic guitar is built with a larger soundbox than a standard classical guitar. A critical structural and tonal component of an acoustic guitar is the bracing, a systems of struts glued to the inside of the back and top. Steel-string guitars use different bracing systems from classical guitars, typically using X-bracing instead of fan bracing. (Another simpler system, called ladder bracing, where the braces are all placed across the width of the instrument, is used on all types of flat-top guitars on the back.) Innovations in bracing design have emerged, notably the A-brace developed by British luthier Roger Bucknall of Fylde Guitars. Most luthiers and experienced players agree that a good solid top (as opposed to laminated or plywood) is the most important factor in the tone of the guitar. Solid backs and sides can also contribute to a pleasant sound, although laminated sides and backs are acceptable alternatives, commonly found in mid-level guitars (in the range of US$300–$1000). From the 1960s through the 1980s, "by far the most significant developments in the design and construction of acoustic guitars" were made by the Ovation Guitar Company. It introduced a composite roundback bowl, which replaced the square back and sides of traditional guitars; because of its engineering design, Ovation guitars could be amplified without producing the obnoxious feedback that had plagued acoustic guitars before. Ovation also pioneered with electronics, such as pickup systems and electronic tuners. Amplification A steel-string guitar can be using any of three techniques: a microphone, possibly clipped to the guitar body; a detachable pickup, often straddling the soundhole and using the same magnetic principle as a traditional electric guitar; or a transducer built into the body. The last type of guitar is commonly called an acoustic-electric guitar as it can be played either "unplugged" as an acoustic, or plugged in as an electric. The most common type is a piezoelectric pickup, which is composed of a thin sandwich of quartz crystal. When compressed, the crystal produces a small electric current, so when placed under the bridge saddle, the vibrations of the strings through the saddle, and of the body of the instrument, are converted to a weak electrical signal. This signal is often sent to a pre-amplifier, which increases the signal strength and normally incorporates an equalizer. The output of the preamplifier then goes to a separate amplifier system similar to that for an electric guitar. Several manufacturers produce specialised acoustic guitar amplifiers, which are designed to give undistorted and full-range reproduction. Music and players Until the 1960s, the predominant forms of music played on the flat-top, steel-string guitar remained relatively stable and included acoustic blues, country, bluegrass, folk, and several genres of rock. The concept of playing solo steel-string guitar in a concert setting was introduced in the early 1960s by such performers as Davey Graham and John Fahey, who used country blues fingerpicking techniques to compose original compositions with structures somewhat like European classical music. Fahey contemporary Robbie Basho added elements of Indian classical music and Leo Kottke used a Faheyesque approach to make the first solo steel-string guitar "hit" record. Steel-string guitars are also important in the world of flatpicking, as utilized by such artists as Clarence White, Tony Rice, Bryan Sutton, Doc Watson and David Grier. Luthiers have been experimenting with redesigning the acoustic guitar for these players. These flat-top, steel-string guitars are constructed and voiced more for classical-like fingerpicking and less for chordal accompaniment (strumming). Some luthiers have increasingly focused their attention on the needs of fingerstylists and have developed unique guitars for this style of playing. Many other luthiers attempt to recreate the guitars of the "Golden Era" of C.F. Martin & Co. This was started by Roy Noble, who built the guitar played by Clarence White from 1968 to 1972, and was followed by Bill Collings, Marty Lanham, Dana Bourgeois, Randy Lucas, Lynn Dudenbostel and Wayne Henderson, a few of the luthiers building guitars today inspired by vintage Martins, the pre–World War II models in particular. As prices for vintage Martins continue to rise exponentially, upscale guitar enthusiasts have demanded faithful recreations and luthiers are working to fill that demand. See also Guitar List of guitar manufacturers Dingulator Strumming References Acoustic guitars American musical instruments Rhythm section
2250
https://en.wikipedia.org/wiki/Abiotic%20stress
Abiotic stress
Abiotic stress is the negative impact of non-living factors on the living organisms in a specific environment. The non-living variable must influence the environment beyond its normal range of variation to adversely affect the population performance or individual physiology of the organism in a significant way. Whereas a biotic stress would include living disturbances such as fungi or harmful insects, abiotic stress factors, or stressors, are naturally occurring, often intangible and inanimate factors such as intense sunlight, temperature or wind that may cause harm to the plants and animals in the area affected. Abiotic stress is essentially unavoidable. Abiotic stress affects animals, but plants are especially dependent, if not solely dependent, on environmental factors, so it is particularly constraining. Abiotic stress is the most harmful factor concerning the growth and productivity of crops worldwide. Research has also shown that abiotic stressors are at their most harmful when they occur together, in combinations of abiotic stress factors. Examples Abiotic stress comes in many forms. The most common of the stressors are the easiest for people to identify, but there are many other, less recognizable abiotic stress factors which affect environments constantly. The most basic stressors include: High winds Extreme temperatures Drought Flood Other natural disasters, such as tornadoes and wildfires. Cold Heat Nutrient deficiency Lesser-known stressors generally occur on a smaller scale. They include: poor edaphic conditions like rock content and pH levels, high radiation, compaction, contamination, and other, highly specific conditions like rapid rehydration during seed germination. Effects Abiotic stress, as a natural part of every ecosystem, will affect organisms in a variety of ways. Although these effects may be either beneficial or detrimental, the location of the area is crucial in determining the extent of the impact that abiotic stress will have. The higher the latitude of the area affected, the greater the impact of abiotic stress will be on that area. So, a taiga or boreal forest is at the mercy of whatever abiotic stress factors may come along, while tropical zones are much less susceptible to such stressors. Benefits One example of a situation where abiotic stress plays a constructive role in an ecosystem is in natural wildfires. While they can be a human safety hazard, it is productive for these ecosystems to burn out every once in a while so that new organisms can begin to grow and thrive. Even though it is healthy for an ecosystem, a wildfire can still be considered an abiotic stressor, because it puts an obvious stress on individual organisms within the area. Every tree that is scorched and each bird nest that is devoured is a sign of the abiotic stress. On the larger scale, though, natural wildfires are positive manifestations of abiotic stress. What also needs to be taken into account when looking for benefits of abiotic stress, is that one phenomenon may not affect an entire ecosystem in the same way. While a flood will kill most plants living low on the ground in a certain area, if there is rice there, it will thrive in the wet conditions. Another example of this is in phytoplankton and zooplankton. The same types of conditions are usually considered stressful for these two types of organisms. They act very similarly when exposed to ultraviolet light and most toxins, but at elevated temperatures the phytoplankton reacts negatively, while the thermophilic zooplankton reacts positively to the increase in temperature. The two may be living in the same environment, but an increase in temperature of the area would prove stressful only for one of the organisms. Lastly, abiotic stress has enabled species to grow, develop, and evolve, furthering natural selection as it picks out the weakest of a group of organisms. Both plants and animals have evolved mechanisms allowing them to survive extremes. Detriments The most obvious detriment concerning abiotic stress involves farming. It has been claimed by one study that abiotic stress causes the most crop loss of any other factor and that most major crops are reduced in their yield by more than 50% from their potential yield. Because abiotic stress is widely considered a detrimental effect, the research on this branch of the issue is extensive. For more information on the harmful effects of abiotic stress, see the sections below on plants and animals. In plants A plant's first line of defense against abiotic stress is in its roots. If the soil holding the plant is healthy and biologically diverse, the plant will have a higher chance of surviving stressful conditions. The plant responses to stress are dependent on the tissue or organ affected by the stress. For example, transcriptional responses to stress are tissue or cell specific in roots and are quite different depending on the stress involved. One of the primary responses to abiotic stress such as high salinity is the disruption of the Na+/K+ ratio in the cytoplasm of the plant cell. High concentrations of Na+, for example, can decrease the capacity for the plant to take up water and also alter enzyme and transporter functions. Evolved adaptations to efficiently restore cellular ion homeostasis have led to a wide variety of stress tolerant plants. Facilitation, or the positive interactions between different species of plants, is an intricate web of association in a natural environment. It is how plants work together. In areas of high stress, the level of facilitation is especially high as well. This could possibly be because the plants need a stronger network to survive in a harsher environment, so their interactions between species, such as cross-pollination or mutualistic actions, become more common to cope with the severity of their habitat. Plants also adapt very differently from one another, even from a plant living in the same area. When a group of different plant species was prompted by a variety of different stress signals, such as drought or cold, each plant responded uniquely. Hardly any of the responses were similar, even though the plants had become accustomed to exactly the same home environment. Serpentine soils (media with low concentrations of nutrients and high concentrations of heavy metals) can be a source of abiotic stress. Initially, the absorption of toxic metal ions is limited by cell membrane exclusion. Ions that are absorbed into tissues are sequestered in cell vacuoles. This sequestration mechanism is facilitated by proteins on the vacuole membrane. An example of plants that adapt to serpentine soil are Metallophytes, or hyperaccumulators, as they are known for their ability to absorbed heavy metals using the root-to-shoot translocation (which it will absorb into shoots rather than the plant itself). They're also extinguished for their ability to absorb toxic substances from heavy metals. Chemical priming has been proposed to increase tolerance to abiotic stresses in crop plants. In this method, which is analogous to vaccination, stress-inducing chemical agents are introduced to the plant in brief doses so that the plant begins preparing defense mechanisms. Thus, when the abiotic stress occurs, the plant has already prepared defense mechanisms that can be activated faster and increase tolerance. Prior exposure to tolerable doses of biotic stresses such as phloem-feeding insect infestation have also been shown to increase tolerance to abiotic stresses in plant Impact on food production Abiotic stress mostly affects plants used in agriculture. Some examples of adverse conditions (which may be caused by climate change) are high or low temperatures, drought, salinity, and toxins. Rice (Oryza sativa) is a classic example. Rice is a staple food throughout the world, especially in China and India. Rice plants can undergo different types of abiotic stresses, like drought and high salinity. These stress conditions adversely affect rice production. Genetic diversity has been studied among several rice varieties with different genotypes, using molecular markers. Chickpea production is affected by drought. Chickpeas are one of the most important foods in the world. Wheat is another major crop that is affected by drought: lack of water affects the plant development, and can wither the leaves. Maize crops can be affected by high temperature and drought, leading to the loss of maize crops due to poor plant development. Soybean is a major source of protein, and its production is also affected by drought. Salt stress in plants Soil salinization, the accumulation of water-soluble salts to levels that negatively impact plant production, is a global phenomenon affecting approximately 831 million hectares of land. More specifically, the phenomenon threatens 19.5% of the world's irrigated agricultural land and 2.1% of the world's non-irrigated (dry-land) agricultural lands. High soil salinity content can be harmful to plants because water-soluble salts can alter osmotic potential gradients and consequently inhibit many cellular functions. For example, high soil salinity content can inhibit the process of photosynthesis by limiting a plant's water uptake; high levels of water-soluble salts in the soil can decrease the osmotic potential of the soil and consequently decrease the difference in water potential between the soil and the plant's roots, thereby limiting electron flow from H2O to P680 in Photosystem II's reaction center. Over generations, many plants have mutated and built different mechanisms to counter salinity effects. A good combatant of salinity in plants is the hormone ethylene. Ethylene is known for regulating plant growth and development and dealing with stress conditions. Many central membrane proteins in plants, such as ETO2, ERS1 and EIN2, are used for ethylene signaling in many plant growth processes. Mutations in these proteins can lead to heightened salt sensitivity and can limit plant growth. The effects of salinity has been studied on Arabidopsis plants that have mutated ERS1, ERS2, ETR1, ETR2 and EIN4 proteins. These proteins are used for ethylene signaling against certain stress conditions, such as salt and the ethylene precursor ACC is used to suppress any sensitivity to the salt stress. Phosphate starvation in plants Phosphorus (P) is an essential macronutrient required for plant growth and development, but it is present only in limited quantities in most of the world's soil. Plants use P mainly in the form of soluble inorganic phosphates (PO4−−−) but are subject to abiotic stress when there is not enough soluble PO4−−− in the soil. Phosphorus forms insoluble complexes with Ca and Mg in alkaline soils and with Al and Fe in acidic soils that make the phosphorus unavailable for plant roots. When there is limited bioavailable P in the soil, plants show extensive symptoms of abiotic stress, such as short primary roots and more lateral roots and root hairs to make more surface available for phosphate absorption, exudation of organic acids and phosphatase to release phosphates from complex P–containing molecules and make it available for growing plants' organs. It has been shown that PHR1, a MYB-related transcription factor, is a master regulator of P-starvation response in plants. PHR1 also has been shown to regulate extensive remodeling of lipids and metabolites during phosphorus limitation stress Drought stress Drought stress, defined as naturally occurring water deficit, is a main cause of crop losses in agriculture. This is because water is essential for many fundamental processes in plant growth. It has become especially important in recent years to find a way to combat drought stress. A decrease in precipitation and consequent increase in drought are extremely likely in the future due to an increase in global warming. Plants have come up with many mechanisms and adaptations to try and deal with drought stress. One of the leading ways that plants combat drought stress is by closing their stomata. A key hormone regulating stomatal opening and closing is abscisic acid (ABA). Synthesis of ABA causes the ABA to bind to receptors. This binding then affects the opening of ion channels, thereby decreasing turgor pressure in the stomata and causing them to close. Recent studies, by Gonzalez-Villagra, et al., have showed how ABA levels increased in drought-stressed plants (2018). They showed that when plants were placed in a stressful situation they produced more ABA to try to conserve any water they had in their leaves. Another extremely important factor in dealing with drought stress and regulating the uptake and export of water is aquaporins (AQPs). AQPs are integral membrane proteins that make up channels. These channels' main job is the transport of water and other essential solutes. AQPs are both transcriptionally and post-transcriptionally regulated by many different factors such as ABA, GA3, pH and Ca2+; and the specific levels of AQPs in certain parts of the plant, such as roots or leaves, helps to draw as much water into the plant as possible. By understanding the mechanisms of both AQPs and the hormone ABA, scientists will be better able to produce drought-resistant plants in the future. It is interesting that plants that are consistently exposed to drought have been found to form a sort of "memory". A study by Tombesi et al., found that plants which had previously been exposed to drought were able to come up with a sort of strategy to minimize water loss and decrease water use. They found that plants which were exposed to drought conditions actually changed the way they regulated their stomata and what they called "hydraulic safety margin" so as to decrease the vulnerability of the plant. By changing the regulation of stomata and subsequently the transpiration, plants were able to function better when less water was available. In animals For animals, the most stressful of all the abiotic stressors is heat. This is because many species are unable to regulate their internal body temperature. Even in the species that are able to regulate their own temperature, it is not always a completely accurate system. Temperature determines metabolic rates, heart rates, and other very important factors within the bodies of animals, so an extreme temperature change can easily distress the animal's body. Animals can respond to extreme heat, for example, through natural heat acclimation or by burrowing into the ground to find a cooler space. It is also possible to see in animals that a high genetic diversity is beneficial in providing resiliency against harsh abiotic stressors. This acts as a sort of stock room when a species is plagued by the perils of natural selection. A variety of galling insects are among the most specialized and diverse herbivores on the planet, and their extensive protections against abiotic stress factors have helped the insect in gaining that position of honor. In endangered species Biodiversity is determined by many things, and one of them is abiotic stress. If an environment is highly stressful, biodiversity tends to be low. If abiotic stress does not have a strong presence in an area, the biodiversity will be much higher. This idea leads into the understanding of how abiotic stress and endangered species are related. It has been observed through a variety of environments that as the level of abiotic stress increases, the number of species decreases. This means that species are more likely to become population threatened, endangered, and even extinct, when and where abiotic stress is especially harsh. See also Ecophysiology References Stress (biological and psychological) Biodiversity Habitat Agriculture Botany
2274
https://en.wikipedia.org/wiki/Arthur%20Eddington
Arthur Eddington
Sir Arthur Stanley Eddington (28 December 1882 – 22 November 1944) was an English astronomer, physicist, and mathematician. He was also a philosopher of science and a populariser of science. The Eddington limit, the natural limit to the luminosity of stars, or the radiation generated by accretion onto a compact object, is named in his honour. Around 1920, he foreshadowed the discovery and mechanism of nuclear fusion processes in stars, in his paper "The Internal Constitution of the Stars". At that time, the source of stellar energy was a complete mystery; Eddington was the first to correctly speculate that the source was fusion of hydrogen into helium. Eddington wrote a number of articles that announced and explained Einstein's theory of general relativity to the English-speaking world. World War I had severed many lines of scientific communication, and new developments in German science were not well known in England. He also conducted an expedition to observe the solar eclipse of 29 May 1919 on the Island of Principe that provided one of the earliest confirmations of general relativity, and he became known for his popular expositions and interpretations of the theory. Early years Eddington was born 28 December 1882 in Kendal, Westmorland (now Cumbria), England, the son of Quaker parents, Arthur Henry Eddington, headmaster of the Quaker School, and Sarah Ann Shout. His father taught at a Quaker training college in Lancashire before moving to Kendal to become headmaster of Stramongate School. He died in the typhoid epidemic which swept England in 1884. His mother was left to bring up her two children with relatively little income. The family moved to Weston-super-Mare where at first Stanley (as his mother and sister always called Eddington) was educated at home before spending three years at a preparatory school. The family lived at a house called Varzin, 42 Walliscote Road, Weston-super-Mare. There is a commemorative plaque on the building explaining Sir Arthur's contribution to science. In 1893 Eddington entered Brynmelyn School. He proved to be a most capable scholar, particularly in mathematics and English literature. His performance earned him a scholarship to Owens College, Manchester (what was later to become the University of Manchester), in 1898, which he was able to attend, having turned 16 that year. He spent the first year in a general course, but he turned to physics for the next three years. Eddington was greatly influenced by his physics and mathematics teachers, Arthur Schuster and Horace Lamb. At Manchester, Eddington lived at Dalton Hall, where he came under the lasting influence of the Quaker mathematician J. W. Graham. His progress was rapid, winning him several scholarships and he graduated with a BSc in physics with First Class Honours in 1902. Based on his performance at Owens College, he was awarded a scholarship to Trinity College, Cambridge, in 1902. His tutor at Cambridge was Robert Alfred Herman and in 1904 Eddington became the first ever second-year student to be placed as Senior Wrangler. After receiving his M.A. in 1905, he began research on thermionic emission in the Cavendish Laboratory. This did not go well, and meanwhile he spent time teaching mathematics to first year engineering students. This hiatus was brief. Through a recommendation by E. T. Whittaker, his senior colleague at Trinity College, he secured a position at the Royal Observatory, Greenwich, where he was to embark on his career in astronomy, a career whose seeds had been sown even as a young child when he would often "try to count the stars". Astronomy In January 1906, Eddington was nominated to the post of chief assistant to the Astronomer Royal at the Royal Greenwich Observatory. He left Cambridge for Greenwich the following month. He was put to work on a detailed analysis of the parallax of 433 Eros on photographic plates that had started in 1900. He developed a new statistical method based on the apparent drift of two background stars, winning him the Smith's Prize in 1907. The prize won him a fellowship of Trinity College, Cambridge. In December 1912, George Darwin, son of Charles Darwin, died suddenly, and Eddington was promoted to his chair as the Plumian Professor of Astronomy and Experimental Philosophy in early 1913. Later that year, Robert Ball, holder of the theoretical Lowndean chair, also died, and Eddington was named the director of the entire Cambridge Observatory the next year. In May 1914, he was elected a fellow of the Royal Society: he was awarded the Royal Medal in 1928 and delivered the Bakerian Lecture in 1926. Eddington also investigated the interior of stars through theory, and developed the first true understanding of stellar processes. He began this in 1916 with investigations of possible physical explanations for Cepheid variable stars. He began by extending Karl Schwarzschild's earlier work on radiation pressure in Emden polytropic models. These models treated a star as a sphere of gas held up against gravity by internal thermal pressure, and one of Eddington's chief additions was to show that radiation pressure was necessary to prevent collapse of the sphere. He developed his model despite knowingly lacking firm foundations for understanding opacity and energy generation in the stellar interior. However, his results allowed for calculation of temperature, density and pressure at all points inside a star (thermodynamic anisotropy), and Eddington argued that his theory was so useful for further astrophysical investigation that it should be retained despite not being based on completely accepted physics. James Jeans contributed the important suggestion that stellar matter would certainly be ionized, but that was the end of any collaboration between the pair, who became famous for their lively debates. Eddington defended his method by pointing to the utility of his results, particularly his important mass–luminosity relation. This had the unexpected result of showing that virtually all stars, including giants and dwarfs, behaved as ideal gases. In the process of developing his stellar models, he sought to overturn current thinking about the sources of stellar energy. Jeans and others defended the Kelvin–Helmholtz mechanism, which was based on classical mechanics, while Eddington speculated broadly about the qualitative and quantitative consequences of possible proton–electron annihilation and nuclear fusion processes. Around 1920, he anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper "The Internal Constitution of the Stars". At that time, the source of stellar energy was a complete mystery; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation . This was a particularly remarkable development since at that time fusion and thermonuclear energy, and even the fact that stars are largely composed of hydrogen (see metallicity), had not yet been discovered. Eddington's paper, based on knowledge at the time, reasoned that: The leading theory of stellar energy, the contraction hypothesis (cf. the Kelvin–Helmholtz mechanism), should cause stars' rotation to visibly speed up due to conservation of angular momentum. But observations of Cepheid variable stars showed this was not happening. The only other known plausible source of energy was conversion of matter to energy; Einstein had shown some years earlier that a small amount of matter was equivalent to a large amount of energy. Francis Aston had also recently shown that the mass of a helium atom was about 0.8% less than the mass of the four hydrogen atoms which would, combined, form a helium atom, suggesting that if such a combination could happen, it would release considerable energy as a byproduct. If a star contained just 5% of fusible hydrogen, it would suffice to explain how stars got their energy. (We now know that most "ordinary" stars contain far more than 5% hydrogen.) Further elements might also be fused, and other scientists had speculated that stars were the "crucible" in which light elements combined to create heavy elements, but without more accurate measurements of their atomic masses nothing more could be said at the time. All of these speculations were proven correct in the following decades. With these assumptions, he demonstrated that the interior temperature of stars must be millions of degrees. In 1924, he discovered the mass–luminosity relation for stars (see Lecchini in ). Despite some disagreement, Eddington's models were eventually accepted as a powerful tool for further investigation, particularly in issues of stellar evolution. The confirmation of his estimated stellar diameters by Michelson in 1920 proved crucial in convincing astronomers unused to Eddington's intuitive, exploratory style. Eddington's theory appeared in mature form in 1926 as The Internal Constitution of the Stars, which became an important text for training an entire generation of astrophysicists. Eddington's work in astrophysics in the late 1920s and the 1930s continued his work in stellar structure, and precipitated further clashes with Jeans and Edward Arthur Milne. An important topic was the extension of his models to take advantage of developments in quantum physics, including the use of degeneracy physics in describing dwarf stars. Dispute with Chandrasekhar on the mass limit of stars The topic of extension of his models precipitated his dispute with Subrahmanyan Chandrasekhar, who was then a student at Cambridge. Chandrasekhar's work presaged the discovery of black holes, which at the time seemed so absurdly non-physical that Eddington refused to believe that Chandrasekhar's purely mathematical derivation had consequences for the real world. Eddington was wrong and his motivation is controversial. Chandrasekhar's narrative of this incident, in which his work is harshly rejected, portrays Eddington as rather cruel and dogmatic. Chandra benefited from his friendship with Eddington. It was Eddington and Milne who put up Chandra's name for the fellowship for the Royal Society which Chandra obtained. An FRS meant he was at the Cambridge high-table with all the luminaries and a very comfortable endowment for research. Eddington's criticism seems to have been based partly on a suspicion that a purely mathematical derivation from relativity theory was not enough to explain the seemingly daunting physical paradoxes that were inherent to degenerate stars, but to have "raised irrelevant objections" in addition, as Thanu Padmanabhan puts it. Relativity During World War I, Eddington was secretary of the Royal Astronomical Society, which meant he was the first to receive a series of letters and papers from Willem de Sitter regarding Einstein's theory of general relativity. Eddington was fortunate in being not only one of the few astronomers with the mathematical skills to understand general relativity, but owing to his internationalist and pacifist views inspired by his Quaker religious beliefs, one of the few at the time who was still interested in pursuing a theory developed by a German physicist. He quickly became the chief supporter and expositor of relativity in Britain. He and Astronomer Royal Frank Watson Dyson organized two expeditions to observe a solar eclipse in 1919 to make the first empirical test of Einstein's theory: the measurement of the deflection of light by the Sun's gravitational field. In fact, Dyson's argument for the indispensability of Eddington's expertise in this test was what prevented Eddington from eventually having to enter military service. When conscription was introduced in Britain on 2 March 1916, Eddington intended to apply for an exemption as a conscientious objector. Cambridge University authorities instead requested and were granted an exemption on the ground of Eddington's work being of national interest. In 1918, this was appealed against by the Ministry of National Service. Before the appeal tribunal in June, Eddington claimed conscientious objector status, which was not recognized and would have ended his exemption in August 1918. A further two hearings took place in June and July, respectively. Eddington's personal statement at the June hearing about his objection to war based on religious grounds is on record. The Astronomer Royal, Sir Frank Dyson, supported Eddington at the July hearing with a written statement, emphasising Eddington's essential role in the solar eclipse expedition to Príncipe in May 1919. Eddington made clear his willingness to serve in the Friends' Ambulance Unit, under the jurisdiction of the British Red Cross, or as a harvest labourer. However, the tribunal's decision to grant a further twelve months' exemption from military service was on condition of Eddington continuing his astronomy work, in particular in preparation for the Príncipe expedition. The war ended before the end of his exemption. After the war, Eddington travelled to the island of Príncipe off the west coast of Africa to watch the solar eclipse of 29 May 1919. During the eclipse, he took pictures of the stars (several stars in the Hyades cluster, including Kappa Tauri of the constellation Taurus) whose line of sight from the Earth happened to be near the Sun's location in the sky at that time of year. This effect is noticeable only during a total solar eclipse when the sky is dark enough to see stars which are normally obscured by the Sun's brightness. According to the theory of general relativity, stars with light rays that passed near the Sun would appear to have been slightly shifted because their light had been curved by its gravitational field. Eddington showed that Newtonian gravitation could be interpreted to predict half the shift predicted by Einstein. Eddington's observations published the next year allegedly confirmed Einstein's theory, and were hailed at the time as evidence of general relativity over the Newtonian model. The news was reported in newspapers all over the world as a major story. Afterward, Eddington embarked on a campaign to popularize relativity and the expedition as landmarks both in scientific development and international scientific relations. It has been claimed that Eddington's observations were of poor quality, and he had unjustly discounted simultaneous observations at Sobral, Brazil, which appeared closer to the Newtonian model, but a 1979 re-analysis with modern measuring equipment and contemporary software validated Eddington's results and conclusions. The quality of the 1919 results was indeed poor compared to later observations, but was sufficient to persuade contemporary astronomers. The rejection of the results from the expedition to Brazil was due to a defect in the telescopes used which, again, was completely accepted and well understood by contemporary astronomers. Throughout this period, Eddington lectured on relativity, and was particularly well known for his ability to explain the concepts in lay terms as well as scientific. He collected many of these into the Mathematical Theory of Relativity in 1923, which Albert Einstein suggested was "the finest presentation of the subject in any language." He was an early advocate of Einstein's general relativity, and an interesting anecdote well illustrates his humour and personal intellectual investment: Ludwik Silberstein, a physicist who thought of himself as an expert on relativity, approached Eddington at the Royal Society's (6 November) 1919 meeting where he had defended Einstein's relativity with his Brazil-Príncipe solar eclipse calculations with some degree of scepticism, and ruefully charged Arthur as one who claimed to be one of three men who actually understood the theory (Silberstein, of course, was including himself and Einstein as the other). When Eddington refrained from replying, he insisted Arthur not be "so shy", whereupon Eddington replied, "Oh, no! I was wondering who the third one might be!" Cosmology Eddington was also heavily involved with the development of the first generation of general relativistic cosmological models. He had been investigating the instability of the Einstein universe when he learned of both Lemaître's 1927 paper postulating an expanding or contracting universe and Hubble's work on the recession of the spiral nebulae. He felt the cosmological constant must have played the crucial role in the universe's evolution from an Einsteinian steady state to its current expanding state, and most of his cosmological investigations focused on the constant's significance and characteristics. In The Mathematical Theory of Relativity, Eddington interpreted the cosmological constant to mean that the universe is "self-gauging". Fundamental theory and the Eddington number During the 1920s until his death, Eddington increasingly concentrated on what he called "fundamental theory" which was intended to be a unification of quantum theory, relativity, cosmology, and gravitation. At first he progressed along "traditional" lines, but turned increasingly to an almost numerological analysis of the dimensionless ratios of fundamental constants. His basic approach was to combine several fundamental constants in order to produce a dimensionless number. In many cases these would result in numbers close to 1040, its square, or its square root. He was convinced that the mass of the proton and the charge of the electron were a "natural and complete specification for constructing a Universe" and that their values were not accidental. One of the discoverers of quantum mechanics, Paul Dirac, also pursued this line of investigation, which has become known as the Dirac large numbers hypothesis. A somewhat damaging statement in his defence of these concepts involved the fine-structure constant, α. At the time it was measured to be very close to 1/136, and he argued that the value should in fact be exactly 1/136 for epistemological reasons. Later measurements placed the value much closer to 1/137, at which point he switched his line of reasoning to argue that one more should be added to the degrees of freedom, so that the value should in fact be exactly 1/137, the Eddington number. Wags at the time started calling him "Arthur Adding-one". This change of stance detracted from Eddington's credibility in the physics community. The current CODATA value is 1/ Eddington believed he had identified an algebraic basis for fundamental physics, which he termed "E-numbers" (representing a certain group – a Clifford algebra). These in effect incorporated spacetime into a higher-dimensional structure. While his theory has long been neglected by the general physics community, similar algebraic notions underlie many modern attempts at a grand unified theory. Moreover, Eddington's emphasis on the values of the fundamental constants, and specifically upon dimensionless numbers derived from them, is nowadays a central concern of physics. In particular, he predicted a number of hydrogen atoms in the Universe 136 × 2256 ≈ 1.57 1079, or equivalently the half of the total number of particles protons + electrons. He did not complete this line of research before his death in 1944; his book Fundamental Theory was published posthumously in 1948. Eddington number for cycling Eddington is credited with devising a measure of a cyclist's long-distance riding achievements. The Eddington number in the context of cycling is defined as the maximum number E such that the cyclist has cycled at least E miles on at least E days. For example, an Eddington number of 70 would imply that the cyclist has cycled at least 70 miles in a day on at least 70 occasions. Achieving a high Eddington number is difficult, since moving from, say, 70 to 75 will (probably) require more than five new long-distance rides, since any rides shorter than 75 miles will no longer be included in the reckoning. Eddington's own life-time E-number was 84. The Eddington number for cycling is analogous to the h-index that quantifies both the actual scientific productivity and the apparent scientific impact of a scientist. Philosophy Idealism Eddington wrote in his book The Nature of the Physical World that "The stuff of the world is mind-stuff." The idealist conclusion was not integral to his epistemology but was based on two main arguments. The first derives directly from current physical theory. Briefly, mechanical theories of the ether and of the behaviour of fundamental particles have been discarded in both relativity and quantum physics. From this, Eddington inferred that a materialistic metaphysics was outmoded and that, in consequence, since the disjunction of materialism or idealism are assumed to be exhaustive, an idealistic metaphysics is required. The second, and more interesting argument, was based on Eddington's epistemology, and may be regarded as consisting of two parts. First, all we know of the objective world is its structure, and the structure of the objective world is precisely mirrored in our own consciousness. We therefore have no reason to doubt that the objective world too is "mind-stuff". Dualistic metaphysics, then, cannot be evidentially supported. But, second, not only can we not know that the objective world is nonmentalistic, we also cannot intelligibly suppose that it could be material. To conceive of a dualism entails attributing material properties to the objective world. However, this presupposes that we could observe that the objective world has material properties. But this is absurd, for whatever is observed must ultimately be the content of our own consciousness, and consequently, nonmaterial. Eddington believed that physics cannot explain consciousness - "light waves are propagated from the table to the eye; chemical changes occur in the retina; propagation of some kind occurs in the optic nerves; atomic changes follow in the brain. Just where the final leap into consciousness occurs is not clear. We do not know the last stage of the message in the physical world before it became a sensation in consciousness". Ian Barbour, in his book Issues in Science and Religion (1966), p. 133, cites Eddington's The Nature of the Physical World (1928) for a text that argues the Heisenberg Uncertainty Principles provides a scientific basis for "the defense of the idea of human freedom" and his Science and the Unseen World (1929) for support of philosophical idealism "the thesis that reality is basically mental". Charles De Koninck points out that Eddington believed in objective reality existing apart from our minds, but was using the phrase "mind-stuff" to highlight the inherent intelligibility of the world: that our minds and the physical world are made of the same "stuff" and that our minds are the inescapable connection to the world. As De Koninck quotes Eddington, Indeterminism Against Albert Einstein and others who advocated determinism, indeterminism—championed by Eddington—says that a physical object has an ontologically undetermined component that is not due to the epistemological limitations of physicists' understanding. The uncertainty principle in quantum mechanics, then, would not necessarily be due to hidden variables but to an indeterminism in nature itself. Eddington proclaimed "It is a consequence of the advent of the quantum theory that physics is no longer pledged to a scheme of deterministic law". Popular and philosophical writings Eddington wrote a parody of The Rubaiyat of Omar Khayyam, recounting his 1919 solar eclipse experiment. It contained the following quatrain: During the 1920s and 30s, Eddington gave numerous lectures, interviews, and radio broadcasts on relativity, in addition to his textbook The Mathematical Theory of Relativity, and later, quantum mechanics. Many of these were gathered into books, including The Nature of the Physical World and New Pathways in Science. His use of literary allusions and humour helped make these difficult subjects more accessible. Eddington's books and lectures were immensely popular with the public, not only because of his clear exposition, but also for his willingness to discuss the philosophical and religious implications of the new physics. He argued for a deeply rooted philosophical harmony between scientific investigation and religious mysticism, and also that the positivist nature of relativity and quantum physics provided new room for personal religious experience and free will. Unlike many other spiritual scientists, he rejected the idea that science could provide proof of religious propositions. His popular writings made him a household name in Great Britain between the world wars. Death Eddington died of cancer in the Evelyn Nursing Home, Cambridge, on 22 November 1944. He was unmarried. His body was cremated at Cambridge Crematorium (Cambridgeshire) on 27 November 1944; the cremated remains were buried in the grave of his mother in the Ascension Parish Burial Ground in Cambridge. Cambridge University's North West Cambridge development has been named Eddington in his honour. Eddington was played by David Tennant in the television film Einstein and Eddington, with Einstein played by Andy Serkis. The film was notable for its groundbreaking portrayal of Eddington as a somewhat repressed gay man. It was first broadcast in 2008. The actor Paul Eddington was a relative, mentioning in his autobiography (in light of his own weakness in mathematics) "what I then felt to be the misfortune" of being related to "one of the foremost physicists in the world". Obituaries Obituary 1 by Henry Norris Russell, Astrophysical Journal 101 (1943–46) 133 Obituary 2 by A. Vibert Douglas, Journal of the Royal Astronomical Society of Canada, 39 (1943–46) 1 Obituary 3 by Harold Spencer Jones and E. T. Whittaker, Monthly Notices of the Royal Astronomical Society 105 (1943–46) 68 Obituary 4 by Herbert Dingle, The Observatory 66 (1943–46) 1 The Times, Thursday, 23 November 1944; pg. 7; Issue 49998; col D: Obituary (unsigned) – Image of cutting available at Honours Awards and honors Smith's Prize (1907) International Honorary Member of the American Academy of Arts and Sciences (1922) Bruce Medal of Astronomical Society of the Pacific (1924) Henry Draper Medal of the National Academy of Sciences (1924) Gold Medal of the Royal Astronomical Society (1924) International Member of the United States National Academy of Sciences (1925) Foreign membership of the Royal Netherlands Academy of Arts and Sciences (1926) Prix Jules Janssen of the Société astronomique de France (French Astronomical Society) (1928) Royal Medal of the Royal Society (1928) Knighthood (1930) International Member of the American Philosophical Society (1931) Order of Merit (1938) Honorary member of the Norwegian Astronomical Society (1939) Hon. Freeman of Kendal, 1930 Named after him Lunar crater Eddington asteroid 2761 Eddington Royal Astronomical Society's Eddington Medal Eddington mission, now cancelled Eddington Tower, halls of residence at the University of Essex Eddington Astronomical Society, an amateur society based in his hometown of Kendal Eddington, a house (group of students, used for in-school sports matches) of Kirkbie Kendal School Eddington, new suburb of North West Cambridge, opened in 2017 Service Gave the Swarthmore Lecture in 1929 Chairman of the National Peace Council 1941–1943 President of the International Astronomical Union; of the Physical Society, 1930–32; of the Royal Astronomical Society, 1921–23 Romanes Lecturer, 1922 Gifford Lecturer, 1927 In popular culture Eddington is a central figure in the short story "The Mathematician's Nightmare: The Vision of Professor Squarepunt" by Bertrand Russell, a work featured in The Mathematical Magpie by Clifton Fadiman. He was portrayed by David Tennant in the television film Einstein and Eddington, a co-production of the BBC and HBO, broadcast in the United Kingdom on Saturday, 22 November 2008, on BBC2. His thoughts on humour and religious experience were quoted in the adventure game The Witness, a production of the Thelka, Inc., released on 26 January 2016. Time placed him on the cover on 16 April 1934. Publications 1914. Stellar Movements and the Structure of the Universe. London: Macmillan. 1918. Report on the relativity theory of gravitation. London, Fleetway Press, Ltd. 1920. Space, Time and Gravitation: An Outline of the General Relativity Theory. Cambridge University Press. 1922. The theory of relativity and its influence on scientific thought 1923. 1952. The Mathematical Theory of Relativity. Cambridge University Press. 1925. The Domain of Physical Science. 2005 reprint: 1926. Stars and Atoms. Oxford: British Association. 1926. The Internal Constitution of Stars. Cambridge University Press. 1928. The Nature of the Physical World. MacMillan. 1935 replica edition: , University of Michigan 1981 edition: (1926–27 Gifford lectures) 1929. Science and the Unseen World. US Macmillan, UK Allen & Unwin. 1980 Reprint Arden Library . 2004 US reprint – Whitefish, Montana : Kessinger Publications: . 2007 UK reprint London, Allen & Unwin (Swarthmore Lecture), with a new foreword by George Ellis. 1930. Why I Believe in God: Science and Religion, as a Scientist Sees It. Arrow/scrollable preview. 1933. The Expanding Universe: Astronomy's 'Great Debate', 1900–1931. Cambridge University Press. 1935. New Pathways in Science. Cambridge University Press. 1936. Relativity Theory of Protons and Electrons. Cambridge Univ. Press. 1939. Philosophy of Physical Science. Cambridge University Press. (1938 Tarner lectures at Cambridge) 1946. Fundamental Theory. Cambridge University Press. See also Astronomy Chandrasekhar limit Eddington luminosity (also called the Eddington limit) Gravitational lens Outline of astronomy Stellar nucleosynthesis Timeline of stellar astronomy List of astronomers Science Arrow of time Classical unified field theories Degenerate matter Dimensionless physical constant Dirac large numbers hypothesis (also called the Eddington–Dirac number) Eddington number Introduction to quantum mechanics Luminiferous aether Parameterized post-Newtonian formalism Special relativity Theory of everything (also called "final theory" or "ultimate theory") Timeline of gravitational physics and relativity List of experiments People List of science and religion scholars Other Infinite monkey theorem Numerology Ontic structural realism References Further reading Durham, Ian T., "Eddington & Uncertainty". Physics in Perspective (September – December). Arxiv, History of Physics Lecchini, Stefano, "How Dwarfs Became Giants. The Discovery of the Mass–Luminosity Relation" Bern Studies in the History and Philosophy of Science, pp. 224. (2007) Stanley, Matthew. "An Expedition to Heal the Wounds of War: The 1919 Eclipse Expedition and Eddington as Quaker Adventurer." Isis 94 (2003): 57–89. Stanley, Matthew. "So Simple a Thing as a Star: Jeans, Eddington, and the Growth of Astrophysical Phenomenology" in British Journal for the History of Science, 2007, 40: 53–82. External links Trinity College Chapel Arthur Stanley Eddington (1882–1944) . University of St Andrews, Scotland. Quotations by Arthur Eddington Arthur Stanley Eddington The Bruce Medalists. Russell, Henry Norris, "Review of The Internal Constitution of the Stars by A.S. Eddington". Ap.J. 67, 83 (1928). Experiments of Sobral and Príncipe repeated in the space project in proceeding in fórum astronomical. Biography and bibliography of Bruce medalists: Arthur Stanley Eddington Eddington books: The Nature of the Physical World, The Philosophy of Physical Science, Relativity Theory of Protons and Electrons, and Fundamental Theory 1882 births 1944 deaths Alumni of Trinity College, Cambridge Alumni of the Victoria University of Manchester British anti–World War I activists British astrophysicists British conscientious objectors British Christian pacifists Corresponding Members of the Russian Academy of Sciences (1917–1925) Corresponding Members of the USSR Academy of Sciences British cosmologists British Quakers 20th-century British astronomers Fellows of Trinity College, Cambridge Fellows of the Royal Astronomical Society Fellows of the Royal Society Foreign associates of the National Academy of Sciences Knights Bachelor Members of the Order of Merit Members of the Royal Netherlands Academy of Arts and Sciences People from Kendal Presidents of the Physical Society Presidents of the Royal Astronomical Society Recipients of the Bruce Medal Recipients of the Gold Medal of the Royal Astronomical Society British relativity theorists Royal Medal winners Senior Wranglers 20th-century British physicists Plumian Professors of Astronomy and Experimental Philosophy Presidents of the International Astronomical Union Members of the American Philosophical Society
2275
https://en.wikipedia.org/wiki/Apple%20II
Apple II
The Apple II (stylized as ) is an 8-bit home computer and one of the world's first highly successful mass-produced microcomputer products. It was designed primarily by Steve Wozniak; Jerry Manock developed the design of Apple II's foam-molded plastic case, Rod Holt developed the switching power supply, while Steve Jobs's role in the design of the computer was limited to overseeing Jerry Manock's work on the plastic case. It was introduced by Jobs and Wozniak at the 1977 West Coast Computer Faire, and marks Apple's first launch of a personal computer aimed at a consumer market—branded toward American households rather than businessmen or computer hobbyists. Byte magazine referred to the Apple II, Commodore PET 2001, and TRS-80 as the "1977 Trinity". As the Apple II had the defining feature of being able to display color graphics, the Apple logo was redesigned to have a spectrum of colors. The Apple II is the first model in the Apple II series, followed by Apple II+, Apple IIe, Apple IIc, Apple IIc Plus, and the 16-bit Apple IIGS—all of which remained compatible. Production of the last available model, Apple IIe, ceased in November 1993. History By 1976, Steve Jobs had convinced product designer Jerry Manock (who had formerly worked at Hewlett Packard designing calculators) to create the "shell" for the Apple II—a smooth case inspired by kitchen appliances that concealed the internal mechanics. The earliest Apple II computers were assembled in Silicon Valley and later in Texas; printed circuit boards were manufactured in Ireland and Singapore. The first computers went on sale on June 10, 1977 with an MOS Technology 6502 microprocessor running at 1.023 MHz ( of the NTSC color subcarrier), two game paddles (bundled until 1980, when they were found to violate FCC regulations), 4 KiB of RAM, an audio cassette interface for loading programs and storing data, and the Integer BASIC programming language built into ROMs. The video controller displayed 24 lines by 40 columns of monochrome, uppercase-only text on the screen (the original character set matches ASCII characters 20h to 5Fh), with NTSC composite video output suitable for display on a TV monitor or on a regular TV set (by way of a separate RF modulator). The original retail price of the computer with 4 KiB of RAM was and with the maximum 48 KiB of RAM it was To reflect the computer's color graphics capability, the Apple logo on the casing has rainbow stripes, which remained a part of Apple's corporate logo until early 1998. Perhaps most significantly, the Apple II was a catalyst for personal computers across many industries; it opened the doors to software marketed at consumers. Certain aspects of the system's design were influenced by Atari's arcade video game Breakout (1976), which was designed by Wozniak, who said: "A lot of features of the Apple II went in because I had designed Breakout for Atari. I had designed it in hardware. I wanted to write it in software now". This included his design of color graphics circuitry, the addition of game paddle support and sound, and graphics commands in Integer BASIC, with which he wrote Brick Out, a software clone of his own hardware game. Wozniak said in 1984: "Basically, all the game features were put in just so I could show off the game I was familiar with—Breakout—at the Homebrew Computer Club. It was the most satisfying day of my life [when] I demonstrated Breakout—totally written in BASIC. It seemed like a huge step to me. After designing hardware arcade games, I knew that being able to program them in BASIC was going to change the world." Overview In the May 1977 issue of Byte, Steve Wozniak published a detailed description of his design; the article began, "To me, a personal computer should be small, reliable, convenient to use, and inexpensive." The Apple II used peculiar engineering shortcuts to save hardware and reduce costs, such as: Taking advantage of the way the 6502 processor accesses memory: it occurs only on alternate phases of the clock cycle; video generation circuitry memory access on the otherwise unused phase avoids memory contention issues and interruptions of the video stream; This arrangement simultaneously eliminated the need for a separate refresh circuit for DRAM chips, as video transfer accessed each row of dynamic memory within the timeout period. In addition, it did not require separate RAM chips for video RAM, while the PET and TRS-80 had SRAM chips for video; Apart from the 6502 CPU and a few support chips, the vast majority of the semiconductors used were 74LS low-power Schottky chips; Rather than use a complex analog-to-digital circuit to read the outputs of the game controller, Wozniak used a simple timer circuit, built around a quad 555 timer IC called a 558, whose period is proportional to the resistance of the game controller, and he used a software loop to measure the timers; A single 14.31818 MHz master oscillator (fM) was divided by various ratios to produce all other required frequencies, including microprocessor clock signals (fM/14), video transfer counters, and color-burst samples (fM/4). A solderable jumper on the main board allowed to switch between European 50 Hz and USA 60 Hz video. The text and graphics screens have a complex arrangement. For instance, the scanlines were not stored in sequential areas of memory. This complexity was reportedly due to Wozniak's realization that the method would allow for the refresh of dynamic RAM as a side effect (as described above). This method had no cost overhead to have software calculate or look up the address of the required scanline and avoided the need for significant extra hardware. Similarly, in high-resolution graphics mode, color is determined by pixel position and thus can be implemented in software, saving Wozniak the chips needed to convert bit patterns to colors. This also allowed to draw text with subpixel rendering, since orange and blue pixels appear half a pixel-width farther to the right on the screen than green and purple pixels. The Apple II at first used data cassette storage, like most other microcomputers of the time. In 1978, the company introduced an external -inch floppy disk drive, called Disk II (stylized as Disk ][), attached through a controller card that plugs into one of the computer's expansion slots (usually slot 6). The Disk II interface, created by Wozniak, is regarded as an engineering masterpiece for its economy of electronic components. The approach taken in the Disk II controller is typical of Wozniak's designs. With a few small-scale logic chips and a cheap PROM (programmable read-only memory), he created a functional floppy disk interface at a fraction of the component cost of standard circuit configurations. Case design The first production Apple II computers had hand-molded cases; these had visible bubbles and other lumps in them from the imperfect plastic molding process, which was soon switched to machine molding. In addition, the initial case design had no vent openings, causing high heat buildup from the PCB and resulting in the plastic softening and sagging. Apple added vent holes to the case within three months of production; customers with the original case could have them replaced at no charge. PCB revisions The Apple II's printed circuit board (PCB) underwent several revisions, as Steve Wozniak made modifications to it. The earliest version was known as Revision 0, and the first 6,000 units shipped used it. Later revisions added a color killer circuit to prevent color fringing when the computer was in text mode, as well as modifications to improve the reliability of cassette I/O. Revision 0 Apple IIs powered up in an undefined mode and had garbage on-screen, requiring the user to press Reset. This was eliminated in later board revisions. Revision 0 Apple IIs could display only four colors in hi-res mode, but Wozniak was able to increase this to six hi-res colors on later board revisions. The PCB had three RAM banks for a total of 24 RAM chips. Original Apple IIs had jumper switches to adjust RAM size, and RAM configurations could be 4, 8, 12, 16, 20, 24, 32, 36, or 48 KiB. The three smallest memory configurations used 4kx1 DRAMs, with larger ones using 16kx1 DRAMs, or mix of 4-kilobyte and 16-kilobyte banks (the chips in any one bank have to be the same size). The early Apple II+ models retained this feature, but after a drop in DRAM prices, Apple redesigned the circuit boards without the jumpers, so that only 16kx1 chips were supported. A few months later, they started shipping all machines with a full 48 KiB complement of DRAM. Unlike most machines, all integrated circuits on the Apple II PCB were socketed; although this cost more to manufacture and created the possibility of loose chips causing a system malfunction, it was considered preferable to make servicing and replacement of bad chips easier. The Apple II PCB lacks any means of generating an interrupt request, although expansion cards may generate one. Program code had to stop everything to perform any I/O task; like many of the computer's other idiosyncrasies, this was due to cost reasons and Steve Wozniak assuming interrupts were not needed for gaming or using the computer as a teaching tool. Display and graphics Color on the Apple II series uses a quirk of the NTSC television signal standard, which made color display relatively easy and inexpensive to implement. The original NTSC television signal specification was black and white. Color was added later by adding a 3.58-megahertz subcarrier signal that was partially ignored by black-and-white TV sets. Color is encoded based on the phase of this signal in relation to a reference color burst signal. The result is that the position, size, and intensity of a series of pulses define color information. These pulses can translate into pixels on the computer screen, with the possibility of exploiting composite artifact colors. The Apple II display provides two pixels per subcarrier cycle. When the color burst reference signal is turned on and the computer attached to a color display, it can display green by showing one alternating pattern of pixels, magenta with an opposite pattern of alternating pixels, and white by placing two pixels next to each other. Blue and orange are available by tweaking the pixel offset by half a pixel-width in relation to the color-burst signal. The high-resolution display offers more colors by compressing more (and narrower) pixels into each subcarrier cycle. The coarse, low-resolution graphics display mode works differently, as it can output a pattern of dots per pixel to offer more color options. These patterns are stored in the character generator ROM, and replace the text character bit patterns when the computer is switched to low-res graphics mode. The text mode and low-res graphics mode use the same memory region and the same circuitry is used for both. A single HGR page occupied 8 KiB of RAM; in practice this meant that the user had to have at least 12 KiB of total RAM to use HGR mode and 20 KiB to use two pages. Early Apple II games from the 1977–79 period often ran only in text or low-resolution mode in order to support users with small memory configurations; HGR not being near universally supported by games until 1980. Sound Rather than a dedicated sound-synthesis chip, the Apple II has a toggle circuit that can only emit a click through a built-in speaker or a line-out jack; all other sounds (including two-, three- and, eventually, four-voice music and playback of audio samples and speech synthesis) are generated entirely by software that clicked the speaker at just the right times. Similar techniques are used for cassette storage: cassette output works the same as the speaker, and input is a simple zero-crossing detector that serves as a relatively crude (1-bit) audio digitizer. Routines in machine ROM encode and decode data in frequency-shift keying for the cassette. Programming languages Initially, the Apple II was shipped with Integer BASIC encoded in the motherboard ROM chips. Written by Wozniak, the interpreter enabled users to write software applications without needing to purchase additional development utilities. Written with game programmers and hobbyists in mind, the language only supported the encoding of numbers in 16-bit integer format. Since it only supported integers between -32768 and +32767 (signed 16-bit integer), it was less suitable to business software, and Apple soon received complaints from customers. Because Steve Wozniak was busy developing the Disk II hardware, he did not have time to modify Integer BASIC for floating point support. Apple instead licensed Microsoft's 6502 BASIC to create Applesoft BASIC. Disk users normally purchased a so-called Language Card, which had Applesoft in ROM, and was sat below the Integer BASIC ROM in system memory. The user could switch between either BASIC by typing FP or INT in BASIC prompt. Apple also offered a different version of Applesoft for cassette users, which occupied low memory, and was started by using the LOAD command in Integer BASIC. As shipped, Apple II incorporated a machine code monitor with commands for displaying and altering the computer's RAM, either one byte at a time, or in blocks of 256 bytes at once. This enabled programmers to write and debug machine code programs without further development software. The computer powers on into the monitor ROM, displaying a * prompt. From there, Ctrl+B enters BASIC, or a machine language program can be loaded from cassette. Disk software can be booted with Ctrl+P followed by 6, referring to Slot 6 which normally contained the Disk II controller. A 6502 assembler was soon offered on disk, and later the UCSD compiler and operating system for the Pascal language were made available. The Pascal system requires a 16 KiB RAM card to be installed in the language card position (expansion slot 0) in addition to the full 48 KiB of motherboard memory. Manual The first 1,000 or so Apple IIs shipped in 1977 with a 68-page mimeographed "Apple II Mini Manual", hand-bound with brass paper fasteners. This was the basis for the Apple II Reference Manual, which became known as the Red Book for its red cover, published in January 1978. All existing customers who sent in their warranty cards were sent free copies of the Red Book. The Apple II Reference Manual contained the complete schematic of the entire computer's circuitry, and a complete source listing of the "Monitor" ROM firmware that served as the machine's BIOS. An Apple II manual signed by Steve Jobs in 1980 with the inscription "Julian, your generation is the first to grow up with computers. Go change the world." sold at auction for $787,484 in 2021. Operating system The original Apple II provided an operating system in ROM along with a BASIC variant called Integer BASIC. The only form of storage available was cassette tape which was inefficiently slow and, worse, unreliable. In 1977 when Apple decided against the popular but clunky CP/M operating system for Wozniak's innovative disk controller design, it contracted Shepardson Microsystems for $13,000 to write an Apple DOS for the Apple II series. At Shepardson, Paul Laughton developed the crucial disk drive software in just 35 days, a remarkably short deadline by any standard. Apple's Disk II -inch floppy disk drive was released in 1978. The final and most popular version of this software was Apple DOS 3.3. Apple DOS was superseded by ProDOS, which supported a hierarchical filesystem and larger storage devices. With an optional third-party Z80-based expansion card, the Apple II could boot into the CP/M operating system and run WordStar, dBase II, and other CP/M software. With the release of MousePaint in 1984 and the Apple IIGS in 1986, the platform took on the look of the Macintosh user interface, including a mouse. Apple released Applesoft BASIC in 1977, a more advanced variant of the language which users could run instead of Integer BASIC for more capabilities. Some commercial Apple II software booted directly and did not use standard DOS disk formats. This discouraged the copying or modifying of the software on the disks, and improved loading speed. Third-party devices and applications When the Apple II initially shipped in June 1977, no expansion cards were available for the slots. This meant that the user did not have any way of connecting a modem or a printer. One popular hack involved connecting a teletype machine to the cassette output. Wozniak's open-architecture design and Apple II's multiple expansion slots permitted a wide variety of third-party devices, including peripheral cards, such as serial controllers, display controllers, memory boards, hard disks, networking components, and real-time clocks. There were plug-in expansion cards—such as the Z-80 SoftCard—that permitted Apple II to use the Z80 processor and run programs for the CP/M operating system, including the dBase II database and the WordStar word processor. The Z80 card also allowed the connection to a modem, and thereby to any networks that a user might have access to. In the early days, such networks were scarce. But they expanded significantly with the development of bulletin board systems in later years. There was also a third-party 6809 card that allowed OS-9 Level One to be run. Third-party sound cards greatly improved audio capabilities, allowing simple music synthesis and text-to-speech functions. Apple II accelerator cards doubled or quadrupled the computer's speed. Early Apple IIs were often sold with a Sup'R'Mod, which allowed the composite video signal to be viewed in a television. The Soviet Union radio-electronics industry designed Apple II-compatible computer Agat. Roughly 12,000 Agat 7 and 9 models were produced and they were widely used in Soviet schools. Agat 9 computers could run "Apple II" compatibility and native modes. "Apple II" mode allowed to run wider variety of (presumably pirated) Apple II software, but at the expense of less RAM. Because of that Soviet developers preferred native mode over "Apple II" compatibility mode. Reception Jesse Adams Stein wrote, "As the first company to release a 'consumer appliance' micro-computer, Apple Computer offers us a clear view of this shift from a machine to an appliance." But the company also had "to negotiate the attitudes of its potential buyers, bearing in mind social anxieties about the uptake of new technologies in multiple contexts. The office, the home and the 'office-in-the-home' were implicated in these changing spheres of gender stereotypes and technological development." After seeing a crude, wire-wrapped prototype demonstrated by Wozniak and Steve Jobs in November 1976, Byte predicted in April 1977, that the Apple II "may be the first product to fully qualify as the 'appliance computer' ... a completed system which is purchased off the retail shelf, taken home, plugged in and used". The computer's color graphics capability especially impressed the magazine. The magazine published a favorable review of the computer in March 1978, concluding: "For the user that wants color graphics, the Apple II is the only practical choice available in the 'appliance' computer class." Personal Computer World in August 1978 also cited the color capability as a strength, stating that "the prime reason that anyone buys an Apple II must surely be for the colour graphics". While mentioning the "oddity" of the artifact colors that produced output "that is not always what one wishes to do", it noted that "no-one has colour graphics like this at this sort of price". The magazine praised the sophisticated monitor software, user expandability, and comprehensive documentation. The author concluded that "the Apple II is a very promising machine" which "would be even more of a temptation were its price slightly lower ... for the moment, colour is an Apple II". Although it sold well from the launch, the initial market was to hobbyists and computer enthusiasts. Sales expanded exponentially into the business and professional market, when the spreadsheet program VisiCalc was launched in mid-1979. VisiCalc is credited as the defining killer app in the microcomputer industry. By the end of 1977 Apple had sales of for the fiscal year, which included sales of the Apple I. This puts Apple clearly behind the others of the "holy trinity" of the TRS-80 and Commodore PET, even though the TRS-80 was launched last of the three. However, during the first five years of operations, revenues doubled about every four months. Between September 1977 and September 1980, annual sales grew from to . During this period the sole products of the company were the Apple II and its peripherals, accessories, and software. References External links Additional documentation in Bitsavers PDF Document archive Apple II on Old-computers.com Online Apple II Resource Apple2History.org How the Apple ][ Works! – on YouTube by the 8-Bit Guy Apple II computers Computer-related introductions in 1977 6502-based home computers 8-bit computers Products and services discontinued in 1979 ca:Apple II
2303
https://en.wikipedia.org/wiki/Aramaic
Aramaic
Aramaic (; ; ; ; Western Neo-Aramaic ) is a Northwest Semitic language that originated in the ancient region of Syria, and quickly spread to Mesopotamia, the Southern Levant and eastern Anatolia where it has been continually written and spoken, in different varieties, for over three thousand years, today largely by Assyrians, Mandeans and Mizrahi Jews. Aramaic served as a language of public life and administration of ancient kingdoms and empires, and also as a language of divine worship and religious study. Several modern varieties, the Neo-Aramaic languages, are still spoken by Assyrians (whose dialects are influenced by Akkadian), and Mandeans and Mizrahi Jews and is used as the liturgical language of a number of West Asian churches. Aramaic belongs to the Northwest group of the Semitic language family, which also includes the mutually intelligable Canaanite languages such as Hebrew, Edomite, Moabite, Ekronite, Sutean and Phoenician, as well as Amorite and Ugaritic. Aramaic languages are written in the Aramaic alphabet, a descendant of the Phoenician alphabet, and the most prominent alphabet variant is the Syriac alphabet. The Aramaic alphabet also became a base for the creation and adaptation of specific writing systems in some other Semitic languages of West Asia, such as the Hebrew alphabet and the Arabic alphabet. The Aramaic languages are now considered endangered, since several varieties are used mainly by the older generations. Researchers are working to record and analyze all of the remaining varieties of Neo-Aramaic languages before they or in case they become extinct. Aramaic dialects today form the mother tongues of the Assyrians and Mandaeans, as well as some Mizrahi Jews. Early Aramaic inscriptions date from 11th century BC, placing it among the earliest languages to be written down. Aramaicist Holger Gzella notes, "The linguistic history of Aramaic prior to the appearance of the first textual sources in the ninth century BC remains unknown." History Historically and originally, Aramaic was the language of the Arameans, a Semitic-speaking people of the region between the northern Levant and the northern Tigris valley. By around 1000 BC, the Arameans had a string of kingdoms in what is now part of Syria, Lebanon, Jordan, Turkey and the fringes of southern Mesopotamia (Iraq). Aramaic rose to prominence under the Neo-Assyrian Empire (911–605 BC), under whose influence Aramaic became a prestige language after being adopted as a lingua franca of the empire by Assyrian kings, and its use was spread throughout Mesopotamia, the Levant and parts of Asia Minor, Arabian Peninsula and Ancient Iran under Assyrian rule. At its height, Aramaic was spoken in what is now Iraq, Syria, Lebanon, Palestine/Israel, Jordan, Kuwait, parts of southeast and south central Turkey, Northern parts of the Arabian Peninsula and parts of northwest Iran as well as the southern Caucasus, having gradually replaced several other related Semitic languages. According to the Babylonian Talmud (Sanhedrin 38b), the language spoken by the Bible's first was Aramaic. Aramaic was the language of Jesus ("Yeshua Mkheetha", Yeshu, Eshu, Esho, Isho in Aramaic dialects), who spoke the Galilean dialect during his public ministry, as well as the language of several sections of the Hebrew Bible, including parts of the books of Daniel and Ezra, and also the language of the Targum, the Aramaic translation of the Hebrew Bible. It is also the language of the Jerusalem Talmud, Babylonian Talmud and Zohar. The scribes of the Neo-Assyrian bureaucracy had also used Aramaic, and this practice was subsequently inherited by the succeeding Neo-Babylonian Empire (605–539 BC), and later by the Achaemenid Empire (539–330 BC). Mediated by scribes that had been trained in the language, highly standardized written Aramaic (named by scholars as Imperial Aramaic) progressively also became the lingua franca of public life, trade and commerce throughout the Achaemenid territories. Wide use of written Aramaic subsequently led to the adoption of the Aramaic alphabet and (as logograms) some Aramaic vocabulary in the Pahlavi scripts, which were used by several Middle Iranian languages (including Parthian, Middle Persian, Sogdian, and Khwarazmian). Aramaic was slowly phased out of public life starting from after the conquest of Arabia in the 7th century AD, but never fully died out. Some variants of Aramaic are also retained as sacred languages by certain religious communities. Most notable among them is Classical Syriac, the liturgical language of Syriac Christianity. It is used by several communities, including the Assyrian Church of the East, the Ancient Church of the East, the Chaldean Catholic Church, the Syriac Orthodox Church, the Syriac Catholic Church, the Maronite Church, and also the Saint Thomas Christians (Native Christians) and Syrian Christians (K[Q]naya) of Kerala, India. One of Aramaic liturgical dialects was Mandaic, which besides becoming a vernacular (Neo-Mandaic) also remained the liturgical language of Mandaeism. Syriac was also the liturgical language of several now-extinct gnostic faiths, such as Manichaeism. Neo-Aramaic languages are still spoken in the 21st century as a first language by many communities of Assyrian Christians, Jews (in particular, the Jews of Kurdistan/Iraqi Jews), and Mandaeans of the Near East, and with numbers of fluent speakers ranging approximately from 1 million to 2 million, with the main Akkadian influenced languages among Assyrians being Suret (240,000 speakers) and Turoyo (100,000 speakers); in addition to Western Neo-Aramaic (21,700) which persists in only three villages in the Anti-Lebanon Mountains region in western Syria. They have retained use of the once dominant lingua franca despite subsequent language shifts experienced throughout the Middle East. Name The connection between Chaldean, Syriac, and Samaritan as "Aramaic" was first identified in 1679 by German theologian Johann Wilhelm Hilliger. In 1819–21 Ulrich Friedrich Kopp published his Bilder und Schriften der Vorzeit ("Images and Inscriptions of the Past"), in which he established the basis of the paleographical development of the Northwest Semitic scripts. Kopp criticised Jean-Jacques Barthélemy and other scholars who had characterized all the then-known inscriptions and coins as Phoenician, with "everything left to the Phoenicians and nothing to the Arameans, as if they could not have written at all". Kopp noted that some of the words on the Carpentras Stele corresponded to the Aramaic in the Book of Daniel, and in the Book of Ruth. Josephus and Strabo (the latter citing Posidonius) both stated that the "Syrians" called themselves "Arameans". The Septuagint, the earliest extant full copy of the Hebrew Bible, a Greek translation, used the terms Syria and Syrian where the Masoretic Text, the earliest extant Hebrew copy of the Bible, uses the terms Aramean and Aramaic; numerous later bibles followed the Septuagint's usage, including the King James Version. This connection between the names Syrian and Aramaic was discussed in 1835 by Étienne Marc Quatremère. In historical sources, Aramaic language is designated by two distinctive groups of terms, first of them represented by endonymic (native) names, and the other one represented by various exonymic (foreign in origin) names. Native (endonymic) terms for Aramaic language were derived from the same word root as the name of its original speakers, the ancient Arameans. Endonymic forms were also adopted in some other languages, like ancient Hebrew. In the Torah (Hebrew Bible), "Aram" is used as a proper name of several people including descendants of Shem, Nahor, and Jacob. Ancient Aram, bordering northern Israel and what is now called Syria, is considered the linguistic center of Aramaic, the language of the Arameans who settled the area during the Bronze Age . The language is often mistakenly considered to have originated within Assyria (Iraq). In fact, Arameans carried their language and writing into Mesopotamia by voluntary migration, by forced exile of conquering armies, and by nomadic Chaldean invasions of Babylonia during the period from 1200 to 1000 BC. Unlike in Hebrew, designations for Aramaic language in some other ancient languages were mostly exonymic. In ancient Greek, Aramaic language was most commonly known as the "Syrian language", in relation to the native (non-Greek) inhabitants of the historical region of Syria. Since the name of Syria itself emerged as a variant of Assyria, the biblical Ashur, and Akkadian Ashuru, a complex set of semantic phenomena was created, becoming a subject of interest both among ancient writers and modern scholars. The Koine Greek word (Hebraïstí) has been translated as "Aramaic" in some versions of the Christian New Testament, as Aramaic was at that time the language commonly spoken by the Jews. However, is consistently used in Koine Greek at this time to mean Hebrew and (Syristi) is used to mean Aramaic. In Biblical scholarship, the term "Chaldean" was for many years used as a synonym of Aramaic, due to its use in the book of Daniel and subsequent interpretation by Jerome. Geographic distribution During the Neo-Assyrian and Neo-Babylonian Empires, Arameans, the native speakers of Aramaic, began to settle in greater numbers, at first in Babylonia, and later in Assyria (Upper Mesopotamia, modern-day northern Iraq, northeast Syria, southwest Iran, and southeastern Turkey (what was Armenia at the time). The influx eventually resulted in the Neo-Assyrian Empire (911–605 BC) adopting an Akkadian-influenced Imperial Aramaic as the lingua franca of its empire. This policy was continued by the short-lived Neo-Babylonian Empire and Medes, and all three empires became operationally bilingual in written sources, with Aramaic used alongside Akkadian. The Achaemenid Empire (539–323 BC) continued this tradition, and the extensive influence of these empires led to Aramaic gradually becoming the lingua franca of most of western Asia, Anatolia, the Caucasus, and Egypt. Beginning with the rise of the Rashidun Caliphate in the late 7th century, Arabic gradually replaced Aramaic as the lingua franca of the Near East. However, Aramaic remains a spoken, literary, and liturgical language for local Christians and also some Jews. Aramaic also continues to be spoken by the Assyrians of Iraq, northeastern Syria, southeastern Turkey and northwest Iran, with diaspora communities in Armenia, Georgia, Azerbaijan and southern Russia. The Mandaeans also continue to use Mandaic Aramaic as a liturgical language, although most now speak Arabic as their first language. There are still also a small number of first-language speakers of Western Aramaic varieties in isolated villages in western Syria. Being in contact with other regional languages, some Aramaic dialects were often engaged in mutual exchange of influences, particularly with Arabic, Iranian, and Kurdish. The turbulence of the last two centuries (particularly the Assyrian genocide) has seen speakers of first-language and literary Aramaic dispersed throughout the world. However, there are a number of sizable Assyrian towns in northern Iraq such as Alqosh, Bakhdida, Bartella, Tesqopa, and Tel Keppe, and numerous small villages, where Aramaic is still the main spoken language, and many large cities in this region also have Assyrian Aramaic-speaking communities, particularly Mosul, Erbil, Kirkuk, Dohuk, and al-Hasakah. In Modern Israel, the only native Aramaic speaking population are the Jews of Kurdistan, although the language is dying out. However, Aramaic is also experiencing a revival among Maronites in Israel in Jish. Aramaic languages and dialects Aramaic is often spoken of as a single language, but is in reality a group of related languages. Some Aramaic languages differ more from each other than the Romance languages do among themselves. Its long history, extensive literature, and use by different religious communities are all factors in the diversification of the language. Some Aramaic dialects are mutually intelligible, whereas others are not, not unlike the situation with modern varieties of Arabic. Some Aramaic languages are known under different names; for example, Syriac is particularly used to describe the Eastern Aramaic variety used in Christian ethnic communities in Iraq, southeastern Turkey, northeastern Syria, and northwestern Iran, and Saint Thomas Christians in India. Most dialects can be described as either "Eastern" or "Western", the dividing line being roughly the Euphrates, or slightly west of it. It is also helpful to draw a distinction between those Aramaic languages that are modern living languages (often called "Neo-Aramaic"), those that are still in use as literary languages, and those that are extinct and are only of interest to scholars. Although there are some exceptions to this rule, this classification gives "Modern", "Middle", and "Old" periods, alongside "Eastern" and "Western" areas, to distinguish between the various languages and dialects that are Aramaic. Writing system The earliest Aramaic alphabet was based on the Phoenician alphabet. In time, Aramaic developed its distinctive "square" style. The ancient Israelites and other peoples of Canaan adopted this alphabet for writing their own languages. Thus, it is better known as the Hebrew alphabet. This is the writing system used in Biblical Aramaic and other Jewish writing in Aramaic. The other main writing system used for Aramaic was developed by Christian communities: a cursive form known as the Syriac alphabet. A highly modified form of the Aramaic alphabet, the Mandaic alphabet, is used by the Mandaeans. In addition to these writing systems, certain derivatives of the Aramaic alphabet were used in ancient times by particular groups: the Nabataean alphabet in Petra and the Palmyrene alphabet in Palmyra. In modern times, Turoyo (see below) has sometimes been written in a Latin script. Periodization Periodization of historical development of Aramaic language has been the subject of particular interest for scholars, who proposed several types of periodization, based on linguistic, chronological and territorial criteria. Overlapping terminology, used in different periodizations, led to the creation of several polysemic terms, that are used differently among scholars. Terms like: Old Aramaic, Ancient Aramaic, Early Aramaic, Middle Aramaic, Late Aramaic (and some others, like Paleo-Aramaic), were used in various meanings, thus referring (in scope or substance) to different stages in historical development of Aramaic language. Most commonly used types of periodization are those of Klaus Beyer and Joseph Fitzmyer. Periodization of Klaus Beyer (1929–2014): Old Aramaic, from the earliest records, to 200 AD Middle Aramaic, from 200 AD, to 1200 AD Modern Aramaic, from 1200 AD, up to the modern times Periodization of Joseph Fitzmyer (1920–2016): Old Aramaic, from the earliest records, to regional prominence 700 BC Official Aramaic, from 700 BC, to 200 BC Middle Aramaic, from 200 BC, to 200 AD Late Aramaic, from 200 AD, to 700 AD Modern Aramaic, from 700 AD, up to the modern times Recent periodization of Aaron Butts: Old Aramaic, from the earliest records, to 538 BC Achaemenid Aramaic, from 538 BC, to 333 BC Middle Aramaic, from 333 BC, to 200 AD Late Aramaic, from 200 AD, to 1200 AD Neo-Aramaic, from 1200 AD, up to the modern times Old Aramaic Aramaic's long history and diverse and widespread use has led to the development of many divergent varieties, which are sometimes considered dialects, though they have become distinct enough over time that they are now sometimes considered separate languages. Therefore, there is not one singular, static Aramaic language; each time and place rather has had its own variation. The more widely spoken Eastern Aramaic and Mandaic forms are largely restricted to Assyrian Christian and Mandean gnostic communities in Iraq, northeastern Syria, northwestern Iran and southeastern Turkey, whilst the severely endangered Western Neo-Aramaic is spoken by small communities of Arameans in western Syria, and persisted in Mount Lebanon until as late as the 17th century. The term "Old Aramaic" is used to describe the varieties of the language from its first known use, until the point roughly marked by the rise of the Sasanian Empire (224 AD), dominating the influential, eastern dialect region. As such, the term covers over thirteen centuries of the development of Aramaic. This vast time span includes all Aramaic that is now effectively extinct. Regarding the earliest forms, Beyer suggests that written Aramaic probably dates from the 11th century BCE, as it is established by the 10th century, to which he dates the oldest inscriptions of northern Syria. Heinrichs uses the less controversial date of the 9th century, for which there is clear and widespread attestation. The central phase in the development of Old Aramaic was its official use by the Neo-Assyrian Empire (911–608 BC), Neo-Babylonian Empire (620–539 BC) and Achaemenid Empire (500–330 BC). The period before this, dubbed "Ancient Aramaic", saw the development of the language from being spoken in Aramaean city-states to become a major means of communication in diplomacy and trade throughout Mesopotamia, the Levant and Egypt. After the fall of the Achaemenid Empire, local vernaculars became increasingly prominent, fanning the divergence of an Aramaic dialect continuum and the development of differing written standards. Ancient Aramaic "Ancient Aramaic" refers to the earliest known period of the language, from its origin until it becomes the lingua franca of the Fertile Crescent. It was the language of the Aramean city-states of Damascus, Hamath and Arpad. There are inscriptions that evidence the earliest use of the language, dating from the 10th century BC. These inscriptions are mostly diplomatic documents between Aramaean city-states. The alphabet of Aramaic at this early period seems to be based on the Phoenician alphabet, and there is a unity in the written language. It seems that, in time, a more refined alphabet, suited to the needs of the language, began to develop from this in the eastern regions of Aram. Due to increasing Aramean migration eastward, the Western periphery of Assyria became bilingual in Akkadian and Aramean at least as early as the mid-9th century BC. As the Neo-Assyrian Empire conquered Aramean lands west of the Euphrates, Tiglath-Pileser III made Aramaic the Empire's second official language, and it eventually supplanted Akkadian completely. From 700 BC, the language began to spread in all directions, but lost much of its unity. Different dialects emerged in Assyria, Babylonia, the Levant and Egypt. Around 600 BC, Adon, a Canaanite king, used Aramaic to write to an Egyptian Pharaoh. Imperial Aramaic Around 500 BC, following the Achaemenid (Persian) conquest of Mesopotamia under Darius I, Aramaic (as had been used in that region) was adopted by the conquerors as the "vehicle for written communication between the different regions of the vast empire with its different peoples and languages. The use of a single official language, which modern scholarship has dubbed Official Aramaic or Imperial Aramaic, can be assumed to have greatly contributed to the astonishing success of the Achaemenids in holding their far-flung empire together for as long as they did". In 1955, Richard Frye questioned the classification of Imperial Aramaic as an "official language", noting that no surviving edict expressly and unambiguously accorded that status to any particular language. Frye reclassifies Imperial Aramaic as the lingua franca of the Achaemenid territories, suggesting then that the Achaemenid-era use of Aramaic was more pervasive than generally thought. Imperial Aramaic was highly standardised; its orthography was based more on historical roots than any spoken dialect, and the inevitable influence of Persian gave the language a new clarity and robust flexibility. For centuries after the fall of the Achaemenid Empire (in 330 BC), Imperial Aramaic – or a version thereof near enough for it to be recognisable – would remain an influence on the various native Iranian languages. Aramaic script and – as ideograms – Aramaic vocabulary would survive as the essential characteristics of the Pahlavi scripts. One of the largest collections of Imperial Aramaic texts is that of the Persepolis Administrative Archives, found at Persepolis, which number about five hundred. Many of the extant documents witnessing to this form of Aramaic come from Egypt, and Elephantine in particular (see Elephantine papyri). Of them, the best known is the Story of Ahikar, a book of instructive aphorisms quite similar in style to the biblical Book of Proverbs. Consensus regards the Aramaic portion of the Biblical book of Daniel (i.e., 2:4b–7:28) as an example of Imperial (Official) Aramaic. Achaemenid Aramaic is sufficiently uniform that it is often difficult to know where any particular example of the language was written. Only careful examination reveals the occasional loan word from a local language. A group of thirty Aramaic documents from Bactria have been discovered, and an analysis was published in November 2006. The texts, which were rendered on leather, reflect the use of Aramaic in the 4th century BC Achaemenid administration of Bactria and Sogdia. Biblical Aramaic Biblical Aramaic is the Aramaic found in four discrete sections of the Bible: Ezra – documents from the Achaemenid period (5th century BC) concerning the restoration of the temple in Jerusalem. Daniel – five tales and an apocalyptic vision. Jeremiah 10:11 – a single sentence in the middle of a Hebrew text denouncing idolatry. Genesis – translation of a Hebrew place-name. Biblical Aramaic is a somewhat hybrid dialect. It is theorized that some Biblical Aramaic material originated in both Babylonia and Judaea before the fall of the Achaemenid dynasty. Biblical Aramaic presented various challenges for writers who were engaged in early Biblical studies. Since the time of Jerome of Stridon (d. 420), Aramaic of the Bible was named as "Chaldean" (Chaldaic, Chaldee). That label remained common in early Aramaic studies, and persisted up into the nineteenth century. The "Chaldean misnomer" was eventually abandoned, when modern scholarly analyses showed that Aramaic dialect used in Hebrew Bible was not related to ancient Chaldeans and their language. Post-Achaemenid Aramaic The fall of the Achaemenid Empire ( 334–330 BC), and its replacement with the newly created political order, imposed by Alexander the Great (d. 323 BC) and his Hellenistic successors, marked an important turning point in the history of Aramaic language. During the early stages of the post-Achaemenid era, public use of Aramaic language was continued, but shared with the newly introduced Greek language. By the year 300 BC, all of the main Aramaic-speaking regions came under political rule of the newly created Seleucid Empire that promoted Hellenistic culture, and favored Greek language as the main language of public life and administration. During the 3rd century BCE, Greek overtook Aramaic in many spheres of public communication, particularly in highly Hellenized cities throughout the Seleucid domains. However, Aramaic continued to be used, in its post-Achaemenid form, among upper and literate classes of native Aramaic-speaking communities, and also by local authorities (along with the newly introduced Greek). Post-Achaemenid Aramaic, that bears a relatively close resemblance to that of the Achaemenid period, continued to be used up to the 2nd century BCE. By the end of the 2nd century BC, several variants of Post-Achaemenid Aramaic emerged, bearing regional characteristics. One of them was Hasmonaean Aramaic, the official administrative language of Hasmonaean Judaea (142–37 BC), alongside Hebrew which was the language preferred in religious and some other public uses (coinage). It influenced the Biblical Aramaic of the Qumran texts, and was the main language of non-biblical theological texts of that community. The major Targums, translations of the Hebrew Bible into Aramaic, were originally composed in Hasmonaean Aramaic. It also appears in quotations in the Mishnah and Tosefta, although smoothed into its later context. It is written quite differently from Achaemenid Aramaic; there is an emphasis on writing as words are pronounced rather than using etymological forms. The use of written Aramaic in the Achaemenid bureaucracy also precipitated the adoption of Aramaic(-derived) scripts to render a number of Middle Iranian languages. Moreover, many common words, including even pronouns, particles, numerals, and auxiliaries, continued to written as Aramaic "words" even when writing Middle Iranian languages. In time, in Iranian usage, these Aramaic "words" became disassociated from the Aramaic language and came to be understood as signs (i.e. logograms), much like the symbol '&' is read as "and" in English and the original Latin et is now no longer obvious. Under the early 3rd-century BC Parthian Arsacids, whose government used Greek but whose native language was Parthian, the Parthian language and its Aramaic-derived writing system both gained prestige. This in turn also led to the adoption of the name 'pahlavi' (< parthawi, "of the Parthians") for that writing system. The Persian Sassanids, who succeeded the Parthian Arsacids in the mid-3rd century AD, subsequently inherited/adopted the Parthian-mediated Aramaic-derived writing system for their own Middle Iranian ethnolect as well. That particular Middle Iranian dialect, Middle Persian, i.e. the language of Persia proper, subsequently also became a prestige language. Following the conquest of the Sassanids by the Arabs in the 7th-century, the Aramaic-derived writing system was replaced by the Arabic alphabet in all but Zoroastrian usage, which continued to use the name 'pahlavi' for the Aramaic-derived writing system and went on to create the bulk of all Middle Iranian literature in that writing system. Other regional dialects continued to exist alongside these, often as simple, spoken variants of Aramaic. Early evidence for these vernacular dialects is known only through their influence on words and names in a more standard dialect. However, some of those regional dialects became written languages by the 2nd century BC. These dialects reflect a stream of Aramaic that is not directly dependent on Achaemenid Aramaic, and they also show a clear linguistic diversity between eastern and western regions. Targumic Babylonian Targumic is the later post-Achaemenid dialect found in the Targum Onqelos and Targum Jonathan, the "official" targums. The original, Hasmonaean targums had reached Babylon sometime in the 2nd or 3rd century AD. They were then reworked according to the contemporary dialect of Babylon to create the language of the standard targums. This combination formed the basis of Babylonian Jewish literature for centuries to follow. Galilean Targumic is similar to Babylonian Targumic. It is the mixing of literary Hasmonaean with the dialect of Galilee. The Hasmonaean targums reached Galilee in the 2nd century AD, and were reworked into this Galilean dialect for local use. The Galilean Targum was not considered an authoritative work by other communities, and documentary evidence shows that its text was amended. From the 11th century AD onwards, once the Babylonian Targum had become normative, the Galilean version became heavily influenced by it. Babylonian Documentary Aramaic Babylonian Documentary Aramaic is a dialect in use from the 3rd century AD onwards. It is the dialect of Babylonian private documents, and, from the 12th century, all Jewish private documents are in Aramaic. It is based on Hasmonaean with very few changes. This was perhaps because many of the documents in BDA are legal documents, the language in them had to be sensible throughout the Jewish community from the start, and Hasmonaean was the old standard. Nabataean Nabataean Aramaic was the written language of the Arab kingdom of Nabataea, whose capital was Petra. The kingdom (c. 200 BC – 106 AD) controlled the region to the east of the Jordan River, the Negev, the Sinai Peninsula and the northern Hijaz, and supported a wide-ranging trade network. The Nabataeans used imperial Aramaic for written communications, rather than their native Arabic. Nabataean Aramaic developed from Imperial Aramaic, with some influence from Arabic: "l" is often turned into "n", and there are some Arabic loanwords. Arabic influence on Nabataean Aramaic increased over time. Some Nabataean Aramaic inscriptions date from the early days of the kingdom, but most datable inscriptions are from the first four centuries AD. The language is written in a cursive script which was the precursor to the Arabic alphabet. After annexation by the Romans in 106 AD, most of Nabataea was subsumed into the province of Arabia Petraea, the Nabataeans turned to Greek for written communications, and the use of Aramaic declined. Palmyrene Palmyrene Aramaic is the dialect that was in use in the Syriac city state of Palmyra in the Syrian Desert from 44 BC to 274 AD. It was written in a rounded script, which later gave way to cursive Estrangela. Like Nabataean, Palmyrene was influenced by Arabic, but to a much lesser degree. Eastern dialects In the eastern regions (from Mesopotamia to Persia), dialects like Palmyrene Aramaic and Arsacid Aramaic gradually merged with the regional vernacular dialects, thus creating languages with a foot in Achaemenid and a foot in regional Aramaic. In the Kingdom of Osroene, founded in 132 BCE and centred in Edessa (Urhay), the regional dialect became the official language: Edessan Aramaic (Urhaya), that later came to be known as Classical Syriac. On the upper reaches of the Tigris, East Mesopotamian Aramaic flourished, with evidence from the regions of Hatra (Hatran Aramaic) and Assur (Assurian Aramaic). Tatian, the author of the gospel harmony the Diatessaron came from Assyria, and perhaps wrote his work (172 AD) in East Mesopotamian rather than Syriac or Greek. In Babylonia, the regional dialect was used by the Jewish community, Jewish Old Babylonian (from c. 70 AD). This everyday language increasingly came under the influence of Biblical Aramaic and Babylonian Targumic. The written form of Mandaic, the language of the Mandaean religion, was descended from the Arsacid chancery script. Western dialects The western regional dialects of Aramaic followed a similar course to those of the east. They are quite distinct from the eastern dialects and Imperial Aramaic. Aramaic came to coexist with Canaanite dialects, eventually completely displacing Phoenician in the first century BC and Hebrew around the turn of the fourth century AD. The form of Late Old Western Aramaic used by the Jewish community is best attested, and is usually referred to as Jewish Old Palestinian. Its oldest form is Old East Jordanian, which probably comes from the region of Caesarea Philippi. This is the dialect of the oldest manuscript of the Book of Enoch (c. 170 BC). The next distinct phase of the language is called Old Judaean lasting into the second century AD. Old Judean literature can be found in various inscriptions and personal letters, preserved quotations in the Talmud and receipts from Qumran. Josephus' first, non-extant edition of his The Jewish War was written in Old Judean. The Old East Jordanian dialect continued to be used into the first century AD by pagan communities living to the east of the Jordan. Their dialect is often then called Pagan Old Palestinian, and it was written in a cursive script somewhat similar to that used for Old Syriac. A Christian Old Palestinian dialect may have arisen from the pagan one, and this dialect may be behind some of the Western Aramaic tendencies found in the otherwise eastern Old Syriac gospels (see Peshitta). Languages during Jesus' lifetime It is generally believed by Christian scholars that in the first century, Jews in Judea primarily spoke Aramaic with a decreasing number using Hebrew as their first language, though many learned Hebrew as a liturgical language. Additionally, Koine Greek was the lingua franca of the Near East in trade, among the Hellenized classes (much like French in the 18th, 19th, and 20th centuries in Europe), and in the Roman administration. Latin, the language of the Roman army and higher levels of administration, had almost no impact on the linguistic landscape. In addition to the formal, literary dialects of Aramaic based on Hasmonean and Babylonian, there were a number of colloquial Aramaic dialects. Seven Western Aramaic varieties were spoken in the vicinity of Judea in Jesus' time. They were probably distinctive yet mutually intelligible. Old Judean was the prominent dialect of Jerusalem and Judaea. The region of Ein Gedi spoke the Southeast Judaean dialect. Samaria had its distinctive Samaritan Aramaic, where the consonants "he", "" and "'ayin" all became pronounced as "aleph". Galilean Aramaic, the dialect of Jesus' home region, is only known from a few place names, the influences on Galilean Targumic, some rabbinic literature and a few private letters. It seems to have a number of distinctive features: diphthongs are never simplified into monophthongs. East of the Jordan, the various dialects of East Jordanian were spoken. In the region of Damascus and the Anti-Lebanon Mountains, Damascene Aramaic was spoken (deduced mostly from Modern Western Aramaic). Finally, as far north as Aleppo, the western dialect of Orontes Aramaic was spoken. The three languages, especially Hebrew and Aramaic, influenced one another through loanwords and semantic loans. Hebrew words entered Jewish Aramaic. Most were mostly technical religious words, but a few were everyday words like עץ "wood". Conversely, Aramaic words, such as māmmôn "wealth", were borrowed into Hebrew, and Hebrew words acquired additional senses from Aramaic. For instance, Hebrew ראוי rā'ûi "seen" borrowed the sense "worthy, seemly" from the Aramaic meaning "seen" and "worthy". The Greek of the New Testament preserves some semiticisms, including transliterations of Semitic words. Some are Aramaic, like talitha (ταλιθα), which represents the noun טליתא , and others may be either Hebrew or Aramaic like רבוני Rabbounei (Ραββουνει), which means "my master/great one/teacher" in both languages. Other examples: "Talitha kumi" (טליתא קומי) "Ephphatha" (אתפתח) "Eloi, Eloi, lama sabachthani?" (?אלי, אלי, למה שבקתני) The 2004 film The Passion of the Christ used Aramaic for much of its dialogue, specially reconstructed by a scholar, William Fulco, S.J. Where the appropriate words (in first-century Aramaic) were no longer known, he used the Aramaic of Daniel and fourth-century Syriac and Hebrew as the basis for his work. Middle Aramaic The 3rd century AD is taken as the threshold between Old and Middle Aramaic. During that century, the nature of the various Aramaic languages and dialects began to change. The descendants of Imperial Aramaic ceased to be living languages, and the eastern and western regional languages began to develop vital new literatures. Unlike many of the dialects of Old Aramaic, much is known about the vocabulary and grammar of Middle Aramaic. Eastern Middle Aramaic The dialects of Old Eastern Aramaic continued in Assyria, Mesopotamia, Armenia and Iran as a written language using the Estrangela script from Edessa. Eastern Aramaic comprises Mandean, Assyrian, Babylonian Jewish Aramaic dialects, and Syriac (what emerged as the classical literary dialect of Syriac differs in some small details from the Syriac of the earlier pagan inscriptions from the Edessa area). Syriac Aramaic Syriac Aramaic (also "Classical Syriac") is the literary, liturgical and often spoken language of Syriac Christianity. It originated by the first century AD in the region of Osroene, centered in Edessa, but its golden age was the fourth to eight centuries. This period began with the translation of the Bible into the language: the Peshitta, and the masterful prose and poetry of Ephrem the Syrian. Classical Syriac became the language of the Assyrian Church of the East, and the Syriac Orthodox Church and later the Nestorian Church. Missionary activity led to the spread of Syriac from Mesopotamia and Persia, into Central Asia, India and China. Jewish Babylonian Aramaic Jewish Middle Babylonian is the language employed by Jewish writers in Babylonia between the fourth and the eleventh century. It is most commonly identified with the language of the Babylonian Talmud (which was completed in the seventh century) and of post-Talmudic Geonic literature, which are the most important cultural products of Babylonian Judaism. The most important epigraphic sources for the dialect are the hundreds of incantation bowls written in Jewish Babylonian Aramaic. Mandaic Aramaic The Mandaic language, spoken by the Mandaeans of Iraq and Iran, is a sister dialect to Jewish Babylonian Aramaic, though it is both linguistically and culturally distinct. Classical Mandaic is the language in which the Mandaeans' gnostic religious literature was composed. It is characterized by a highly phonetic orthography and does not make use of vowel diacritics. Western Middle Aramaic The dialects of Old Western Aramaic continued with Jewish Middle Palestinian (in Hebrew "square script"), Samaritan Aramaic (in the old Hebrew script) and Christian Palestinian (in cursive Syriac script). Of these three, only Jewish Middle Palestinian continued as a written language. Samaritan Aramaic The Samaritan Aramaic is earliest attested by the documentary tradition of the Samaritans that can be dated back to the fourth century. Its modern pronunciation is based on the form used in the tenth century. Jewish Palestinian Aramaic In 135, after the Bar Kokhba revolt, many Jewish leaders, expelled from Jerusalem, moved to Galilee. The Galilean dialect thus rose from obscurity to become the standard among Jews in the west. This dialect was spoken not only in Galilee, but also in the surrounding parts. It is the linguistic setting for the Jerusalem Talmud (completed in the 5th century), Palestinian targumim (Jewish Aramaic versions of scripture), and midrashim (biblical commentaries and teaching). The standard vowel pointing for the Hebrew Bible, the Tiberian system (7th century), was developed by speakers of the Galilean dialect of Jewish Middle Palestinian. Classical Hebrew vocalisation, therefore, in representing the Hebrew of this period, probably reflects the contemporary pronunciation of this Aramaic dialect. Middle Judaean Aramaic, the descendant of Old Judaean Aramaic, was no longer the dominant dialect, and was used only in southern Judaea (the variant Engedi dialect continued throughout this period). Likewise, Middle East Jordanian Aramaic continued as a minor dialect from Old East Jordanian Aramaic. The inscriptions in the synagogue at Dura-Europos are either in Middle East Jordanian or Middle Judaean. Christian Palestinian Aramaic This was the language of the Christian Melkite (Chalcedonian) community from the 5th to the 8th century. As a liturgical language, it was used up to the 13th century. It is also been called "Melkite Aramaic" and "Palestinian Syriac". The language itself comes from Old Christian Palestinian Aramaic, but its writing conventions were based on early Middle Syriac, and it was heavily influenced by Greek. For example, the name Jesus, Syriac īšū‘, is written īsūs, a transliteration of the Greek form, in Christian Palestinian. Modern Aramaic As the Western Aramaic languages of the Levant and Lebanon have become nearly extinct in non-liturgical usage, the most prolific speakers of Aramaic dialects in the 21st century are Sureth Eastern Neo-Aramaic speakers, the most numerous being the Northeastern Neo-Aramaic speakers of Mesopotamia. This includes speakers of the Assyrian (235,000 speakers) and Chaldean (216,000 speakers) varieties of Suret and Turoyo (112,000 to 450,000 speakers). Having largely lived in remote areas as insulated communities for over a millennium, the remaining speakers of modern Aramaic dialects, such as the Assyrians and the Arameans, escaped the linguistic pressures experienced by others during the large-scale language shifts that saw the proliferation of other tongues among those who previously did not speak them, most recently the Arabization of the Middle East and North Africa by Arabs beginning with the early Muslim conquests of the seventh century. Modern Eastern Aramaic Modern Eastern Aramaic exists in a wide variety of dialects and languages. There is significant difference between the Aramaic spoken by Assyrian Syriac Christians, Jews, and Mandaeans. The Christian varieties are often called Modern Syriac, Neo-Assyrian or Neo-Syriac, particularly when referring to their literature, being deeply influenced by the old literary and liturgical language, the Syriac language. However, they also have roots in numerous, previously unwritten, local Aramaic varieties and some contain Akkadian language influences, and are not purely the direct descendants of the language of Ephrem the Syrian. The varieties are not all mutually intelligible. The principal Christian varieties are Suret, Assyrian Neo-Aramaic and Chaldean Neo-Aramaic, all belonging to the Northeastern Neo-Aramaic languages and spoken by ethnic Assyrians in Iraq, northeast Syria, southeast Turkey, northwest Iran and in the Assyrian diaspora. The Judeo-Aramaic languages are now mostly spoken in Israel, and most are facing extinction. The Jewish varieties that have come from communities that once lived between Lake Urmia and Mosul are not all mutually intelligible. In some places, for example Urmia, Assyrian Christians and Jews speak mutually unintelligible varieties of Modern Eastern Aramaic in the same place. In others, the Nineveh Plains around Mosul for example, the varieties of these two ethnic communities (Assyrians and Iraqi Jews) are similar enough to allow conversation. Modern Central Neo-Aramaic, being in between Western Neo-Aramaic and Eastern Neo-Aramaic) is generally represented by Turoyo, the language of the Assyrians of Tur Abdin. A related Assyrian language, Mlaḥsô, has recently become extinct. Mandaeans living in the Khuzestan province of Iran and scattered throughout Iraq, speak Neo-Mandaic. It is quite distinct from any other Aramaic variety. Mandaeans number some 50,000–75,000 people, but it is believed Neo-Mandaic may now be spoken fluently by as few as 5000 people, with other Mandaeans having varying degrees of knowledge. Modern Western Aramaic Very little remains of Western Aramaic. Its only remaining vernacular is the Western Neo-Aramaic, which is still spoken in the Aramean villages of Maaloula, al-Sarkha (Bakhah), and Jubb'adin on Syria's side of the Anti-Lebanon Mountains, as well as by some people who migrated from these villages, to Damascus and other larger towns of Syria. All these speakers of Modern Western Aramaic are fluent in Arabic as well. Other Western Aramaic languages, like Jewish Palestinian Aramaic and Samaritan Aramaic, are preserved only in liturgical and literary usage. Phonology Each dialect of Aramaic has its own distinctive pronunciation, and it would not be feasible here to go into all these properties. Aramaic has a phonological palette of 25 to 40 distinct phonemes. Some modern Aramaic pronunciations lack the series of "emphatic" consonants, and some have borrowed from the inventories of surrounding languages, particularly Arabic, Azerbaijani, Kurdish, Persian and Turkish. Vowels As with most Semitic languages, Aramaic can be thought of as having three basic sets of vowels: Open a-vowels Close front i-vowels Close back u-vowels These vowel groups are relatively stable, but the exact articulation of any individual is most dependent on its consonantal setting. The open vowel is an open near-front unrounded vowel ("short" a, somewhat like the first vowel in the English "batter", ). It usually has a back counterpart ("long" a, like the a in "father", , or even tending to the vowel in "caught", ), and a front counterpart ("short" e, like the vowel in "head", ). There is much correspondence between these vowels between dialects. There is some evidence that Middle Babylonian dialects did not distinguish between the short a and short e. In West Syriac dialects, and possibly Middle Galilean, the long a became the o sound. The open e and back a are often indicated in writing by the use of the letters א "alaph" (a glottal stop) or ה "he" (like the English h). The close front vowel is the "long" i (like the vowel in "need", ). It has a slightly more open counterpart, the "long" e, as in the final vowel of "café" (). Both of these have shorter counterparts, which tend to be pronounced slightly more open. Thus, the short close e corresponds with the open e in some dialects. The close front vowels usually use the consonant י y as a mater lectionis. The close back vowel is the "long" u (like the vowel in "school", ). It has a more open counterpart, the "long" o, like the vowel in "show" (). There are shorter, and thus more open, counterparts to each of these, with the short close o sometimes corresponding with the long open a. The close back vowels often use the consonant ו w to indicate their quality. Two basic diphthongs exist: an open vowel followed by י y (ay), and an open vowel followed by ו w (aw). These were originally full diphthongs, but many dialects have converted them to e and o respectively. The so-called "emphatic" consonants (see the next section) cause all vowels to become mid-centralised. Consonants The various alphabets used for writing Aramaic languages have twenty-two letters (all of which are consonants). Some of these letters, though, can stand for two or three different sounds (usually a stop and a fricative at the same point of articulation). Aramaic classically uses a series of lightly contrasted plosives and fricatives: Labial set: פּ\פ p/f and בּ\ב b/v, Dental set: תּ\ת t/θ and דּ\ד d/ð, Velar set: כּ\כ k/x and גּ\ג ɡ/ɣ. Each member of a certain pair is written with the same letter of the alphabet in most writing systems (that is, p and f are written with the same letter), and are near allophones. A distinguishing feature of Aramaic phonology (and that of Semitic languages in general) is the presence of "emphatic" consonants. These are consonants that are pronounced with the root of the tongue retracted, with varying degrees of pharyngealization and velarization. Using their alphabetic names, these emphatics are: ח Ḥêṯ, a voiceless pharyngeal fricative, , ט Ṭêṯ, a pharyngealized t, , ע ʽAyin (or ʽE in some dialects), a pharyngealized glottal stop (sometimes considered to be a voiced pharyngeal approximant), or , צ Ṣāḏê, a pharyngealized s, , ק Qôp, a voiceless uvular stop, . Ancient Aramaic may have had a larger series of emphatics, and some Neo-Aramaic languages definitely do. Not all dialects of Aramaic give these consonants their historic values. Overlapping with the set of emphatics are the "guttural" consonants. They include ח Ḥêṯ and ע ʽAyn from the emphatic set, and add א ʼĀlap̄ (a glottal stop) and ה Hê (as the English "h"). Aramaic classically has a set of four sibilants (ancient Aramaic may have had six): ס, שׂ (as in English "sea"), ז (as in English "zero"), שׁ (as in English "ship"), צ (the emphatic Ṣāḏê listed above). In addition to these sets, Aramaic has the nasal consonants מ m and נ n, and the approximants ר r (usually an alveolar trill), ל l, י y and ו w. Historical sound changes Six broad features of sound change can be seen as dialect differentials: Vowel change occurs almost too frequently to document fully, but is a major distinctive feature of different dialects. Plosive/fricative pair reduction. Originally, Aramaic, like Tiberian Hebrew, had fricatives as conditioned allophones for each plosive. In the wake of vowel changes, the distinction eventually became phonemic; still later, it was often lost in certain dialects. For example, Turoyo has mostly lost , using instead, like Arabic; other dialects (for instance, standard Assyrian Neo-Aramaic) have lost and and replaced them with and , as with Modern Hebrew. In most dialects of Modern Syriac, and are realized as after a vowel. Loss of emphatics. Some dialects have replaced emphatic consonants with non-emphatic counterparts, while those spoken in the Caucasus often have glottalized rather than pharyngealized emphatics. Guttural assimilation is the main distinctive feature of Samaritan pronunciation, also found in Samaritan Hebrew: all the gutturals are reduced to a simple glottal stop. Some Modern Aramaic dialects do not pronounce h in all words (the third person masculine pronoun hu becomes ow). Proto-Semitic */θ/ */ð/ are reflected in Aramaic as */t/, */d/, whereas they became sibilants in Hebrew (the number three is שלוש šālôš in Hebrew but תלת tlāṯ in Aramaic, the word gold is זהב zahav in Hebrew but דהב dehav in Aramaic). Dental/sibilant shifts are still happening in the modern dialects. New phonetic inventory. Modern dialects have borrowed sounds from the dominant surrounding languages. The most frequent borrowings are (as the first consonant in "azure"), (as in "jam") and (as in "church"). The Syriac alphabet has been adapted for writing these new sounds. Grammar As in other Semitic languages, Aramaic morphology (the way words are formed) is based on the consonantal root. The root generally consists of two or three consonants and has a basic meaning, for example, כת״ב k-t-b has the meaning of 'writing'. This is then modified by the addition of vowels and other consonants to create different nuances of the basic meaning: כתבה kṯāḇâ, handwriting, inscription, script, book. כתבי kṯāḇê, books, the Scriptures. כתובה kāṯûḇâ, secretary, scribe. כתבת kiṯḇeṯ, I wrote. אכתב eḵtûḇ, I shall write. Nouns and adjectives Aramaic nouns and adjectives are inflected to show gender, number and state. Aramaic has two grammatical genders: masculine and feminine. The feminine absolute singular is often marked by the ending ה- -â. Nouns can be either singular or plural, but an additional "dual" number exists for nouns that usually come in pairs. The dual number gradually disappeared from Aramaic over time and has little influence in Middle and Modern Aramaic. Aramaic nouns and adjectives can exist in one of three states. To a certain extent, these states correspond to the role of articles and cases in the Indo-European languages: The absolute state is the basic form of a noun. In early forms of Aramaic, the absolute state expresses indefiniteness, comparable to the English indefinite article a(n) (for example, כתבה kṯāḇâ, "a handwriting"), and can be used in most syntactic roles. However, by the Middle Aramaic period, its use for nouns (but not adjectives) had been widely replaced by the emphatic state. The construct state is a form of the noun used to make possessive constructions (for example, כתבת מלכתא kṯāḇat' malkṯâ, "the handwriting of the queen"). In the masculine singular, the form of the construct is often the same as the absolute, but it may undergo vowel reduction in longer words. The feminine construct and masculine construct plural are marked by suffixes. Unlike a genitive case, which marks the possessor, the construct state is marked on the possessed. This is mainly due to Aramaic word order: possessed[const.] possessor[abs./emph.] are treated as a speech unit, with the first unit (possessed) employing the construct state to link it to the following word. In Middle Aramaic, the use of the construct state for all but stock phrases (like בר נשא bar nāšâ, "son of man") begins to disappear. The emphatic or determined state is an extended form of the noun that functions similarly to the definite article. It is marked with a suffix (for example, כתבתא kṯāḇtâ, "the handwriting"). Although its original grammatical function seems to have been to mark definiteness, it is used already in Imperial Aramaic to mark all important nouns, even if they should be considered technically indefinite. This practice developed to the extent that the absolute state became extraordinarily rare in later varieties of Aramaic. Whereas other Northwest Semitic languages, like Hebrew, have the absolute and construct states, the emphatic/determined state is a unique feature to Aramaic. Case endings, as in Ugaritic, probably existed in a very early stage of the language, and glimpses of them can be seen in a few compound proper names. However, as most of those cases were expressed by short final vowels, they were never written, and the few characteristic long vowels of the masculine plural accusative and genitive are not clearly evidenced in inscriptions. Often, the direct object is marked by a prefixed -ל l- (the preposition "to") if it is definite. Adjectives agree with their nouns in number and gender but agree in state only if used attributively. Predicative adjectives are in the absolute state regardless of the state of their noun (a copula may or may not be written). Thus, an attributive adjective to an emphatic noun, as in the phrase "the good king", is written also in the emphatic state מלכא טבא malkâ ṭāḇâking[emph.] good[emph.]. In comparison, the predicative adjective, as in the phrase "the king is good", is written in the absolute state מלכא טב malkâ ṭāḇking[emph.] good[abs.]. The final א- -â in a number of these suffixes is written with the letter aleph. However, some Jewish Aramaic texts employ the letter he for the feminine absolute singular. Likewise, some Jewish Aramaic texts employ the Hebrew masculine absolute singular suffix ים- -îm instead of ין- -în. The masculine determined plural suffix, יא- -ayyâ, has an alternative version, -ê. The alternative is sometimes called the "gentilic plural" for its prominent use in ethnonyms (יהודיא yəhûḏāyê, 'the Jews', for example). This alternative plural is written with the letter aleph, and came to be the only plural for nouns and adjectives of this type in Syriac and some other varieties of Aramaic. The masculine construct plural, -ê, is written with yodh. In Syriac and some other variants this ending is diphthongized to -ai. Possessive phrases in Aramaic can either be made with the construct state or by linking two nouns with the relative particle -[ד[י d[î]-. As the use of the construct state almost disappears from the Middle Aramaic period on, the latter method became the main way of making possessive phrases. For example, the various forms of possessive phrases (for "the handwriting of the queen") are: כתבת מלכתא kṯāḇaṯ malkṯâ – the oldest construction, also known as סמיכות səmîḵûṯ : the possessed object (כתבה kṯābâ, "handwriting") is in the construct state (כתבת kṯāḇaṯ); the possessor (מלכה malkâ, "queen") is in the emphatic state (מלכתא malkṯâ) כתבתא דמלכתא kṯāḇtâ d(î)-malkṯâ – both words are in the emphatic state and the relative particle -[ד[י d[î]- is used to mark the relationship כתבתה דמלכתא kṯāḇtāh d(î)-malkṯâ – both words are in the emphatic state, and the relative particle is used, but the possessed is given an anticipatory, pronominal ending (כתבתה kṯāḇtā-h, "handwriting-her"; literally, "her writing, that (of) the queen"). In Modern Aramaic, the last form is by far the most common. In Biblical Aramaic, the last form is virtually absent. Verbs The Aramaic verb has gradually evolved in time and place, varying between varieties of the language. Verb forms are marked for person (first, second or third), number (singular or plural), gender (masculine or feminine), tense (perfect or imperfect), mood (indicative, imperative, jussive or infinitive) and voice (active, reflexive or passive). Aramaic also employs a system of conjugations, or verbal stems, to mark intensive and extensive developments in the lexical meaning of verbs. Aspectual tense Aramaic has two proper tenses: perfect and imperfect. These were originally aspectual, but developed into something more like a preterite and future. The perfect is unmarked, while the imperfect uses various preformatives that vary according to person, number and gender. In both tenses the third-person singular masculine is the unmarked form from which others are derived by addition of afformatives (and preformatives in the imperfect). In the chart below (on the root כת״ב K-T-B, meaning "to write"), the first form given is the usual form in Imperial Aramaic, while the second is Classical Syriac. Conjugations or verbal stems Like other Semitic languages, Aramaic employs a number of derived verb stems, to extend the lexical coverage of verbs. The basic form of the verb is called the ground stem, or G-stem. Following the tradition of mediaeval Arabic grammarians, it is more often called the Pə‘al פעל (also written Pe‘al), using the form of the Semitic root פע״ל P-‘-L, meaning "to do". This stem carries the basic lexical meaning of the verb. By doubling of the second radical, or root letter, the D-stem or פעל Pa‘‘el is formed. This is often an intensive development of the basic lexical meaning. For example, qəṭal means "he killed", whereas qaṭṭel means "he slew". The precise relationship in meaning between the two stems differs for every verb. A preformative, which can be -ה ha-, -א a- or -ש ša-, creates the C-stem or variously the Hap̄‘el, Ap̄‘el or Šap̄‘el (also spelt הפעל Haph‘el, אפעל Aph‘el and שפעל Shaph‘el). This is often an extensive or causative development of the basic lexical meaning. For example, טעה ṭə‘â means "he went astray", whereas אטעי aṭ‘î means "he deceived". The Šap̄‘el שפעל is the least common variant of the C-stem. Because this variant is standard in Akkadian, it is possible that its use in Aramaic represents loanwords from that language. The difference between the variants הפעל Hap̄‘el and אפעל Ap̄‘el appears to be the gradual dropping of the initial ה h sound in later Old Aramaic. This is noted by the respelling of the older he preformative with א aleph. These three conjugations are supplemented with three further derived stems, produced by the preformative -הת hiṯ- or -את eṯ-. The loss of the initial ה h sound occurs similarly to that in the form above. These three derived stems are the Gt-stem, התפעל Hiṯpə‘el or אתפעל Eṯpə‘el (also written Hithpe‘el or Ethpe‘el), the Dt-stem, התפעּל Hiṯpa‘‘al or אתפעּל Eṯpa‘‘al (also written Hithpa‘‘al or Ethpa‘‘al), and the Ct-stem, התהפעל Hiṯhap̄‘al, אתּפעל Ettap̄‘al, השתפעל Hištap̄‘al or אשתפעל Eštap̄‘al (also written Hithhaph‘al, Ettaph‘al, Hishtaph‘al or Eshtaph‘al). Their meaning is usually reflexive, but later became passive. However, as with other stems, actual meaning differs from verb to verb. Not all verbs use all of these conjugations, and, in some, the G-stem is not used. In the chart below (on the root כת״ב K-T-B, meaning "to write"), the first form given is the usual form in Imperial Aramaic, while the second is Classical Syriac. In Imperial Aramaic, the participle began to be used for a historical present. Perhaps under influence from other languages, Middle Aramaic developed a system of composite tenses (combinations of forms of the verb with pronouns or an auxiliary verb), allowing for narrative that is more vivid. Aramaic syntax usually follows the order verb–subject–object (VSO). Imperial (Persian) Aramaic, however, tended to follow a S-O-V pattern (similar to Akkadian), which was the result of Persian syntactic influence. See also References Sources . External links Ancient Aramaic Audio Files: Contains audio recordings of scripture. The Aramaic Language and Its Classification – Efrem Yildiz, Journal of Assyrian Academic Studies Comprehensive Aramaic Lexicon (including editions of Targums) at the Hebrew Union College, Cincinnati Dictionary of Judeo-Aramaic Jewish Language Research Website: Jewish Aramaic Languages attested from the 10th century BC
2308
https://en.wikipedia.org/wiki/Actinide
Actinide
The actinide () or actinoid () series encompasses the 14 metallic chemical elements with atomic numbers from 89 to 102, actinium through nobelium. The actinide series derives its name from the first element in the series, actinium. The informal chemical symbol An is used in general discussions of actinide chemistry to refer to any actinide. The 1985 IUPAC Red Book recommends that actinoid be used rather than actinide, since the suffix -ide normally indicates a negative ion. However, owing to widespread current use, actinide is still allowed. Since actinoid literally means actinium-like (cf. humanoid or android), it has been argued for semantic reasons that actinium cannot logically be an actinoid, but IUPAC acknowledges its inclusion based on common usage. All the actinides are f-block elements. Lawrencium is sometimes considered one as well, despite being a d-block element and a transition metal. The series mostly corresponds to the filling of the 5f electron shell, although in the ground state many have anomalous configurations involving the filling of the 6d shell due to interelectronic repulsion. In comparison with the lanthanides, also mostly f-block elements, the actinides show much more variable valence. They all have very large atomic and ionic radii and exhibit an unusually large range of physical properties. While actinium and the late actinides (from americium onwards) behave similarly to the lanthanides, the elements thorium, protactinium, and uranium are much more similar to transition metals in their chemistry, with neptunium and plutonium occupying an intermediate position. All actinides are radioactive and release energy upon radioactive decay; naturally occurring uranium and thorium, and synthetically produced plutonium are the most abundant actinides on Earth. These are used in nuclear reactors and nuclear weapons. Uranium and thorium also have diverse current or historical uses, and americium is used in the ionization chambers of most modern smoke detectors. Of the actinides, primordial thorium and uranium occur naturally in substantial quantities. The radioactive decay of uranium produces transient amounts of actinium and protactinium, and atoms of neptunium and plutonium are occasionally produced from transmutation reactions in uranium ores. The other actinides are purely synthetic elements. Nuclear weapons tests have released at least six actinides heavier than plutonium into the environment; analysis of debris from a 1952 hydrogen bomb explosion showed the presence of americium, curium, berkelium, californium, einsteinium and fermium. In presentations of the periodic table, the f-block elements are customarily shown as two additional rows below the main body of the table. This convention is entirely a matter of aesthetics and formatting practicality; a rarely used wide-formatted periodic table inserts the 4f and 5f series in their proper places, as parts of the table's sixth and seventh rows (periods). Discovery, isolation and synthesis Like the lanthanides, the actinides form a family of elements with similar properties. Within the actinides, there are two overlapping groups: transuranium elements, which follow uranium in the periodic table; and transplutonium elements, which follow plutonium. Compared to the lanthanides, which (except for promethium) are found in nature in appreciable quantities, most actinides are rare. Most do not occur in nature, and of those that do, only thorium and uranium do so in more than trace quantities. The most abundant or easily synthesized actinides are uranium and thorium, followed by plutonium, americium, actinium, protactinium, neptunium, and curium. The existence of transuranium elements was suggested in 1934 by Enrico Fermi, based on his experiments. However, even though four actinides were known by that time, it was not yet understood that they formed a family similar to lanthanides. The prevailing view that dominated early research into transuranics was that they were regular elements in the 7th period, with thorium, protactinium and uranium corresponding to 6th-period hafnium, tantalum and tungsten, respectively. Synthesis of transuranics gradually undermined this point of view. By 1944, an observation that curium failed to exhibit oxidation states above 4 (whereas its supposed 6th period homolog, platinum, can reach oxidation state of 6) prompted Glenn Seaborg to formulate an "actinide hypothesis". Studies of known actinides and discoveries of further transuranic elements provided more data in support of this position, but the phrase "actinide hypothesis" (the implication being that a "hypothesis" is something that has not been decisively proven) remained in active use by scientists through the late 1950s. At present, there are two major methods of producing isotopes of transplutonium elements: (1) irradiation of the lighter elements with neutrons; (2) irradiation with accelerated charged particles. The first method is more important for applications, as only neutron irradiation using nuclear reactors allows the production of sizeable amounts of synthetic actinides; however, it is limited to relatively light elements. The advantage of the second method is that elements heavier than plutonium, as well as neutron-deficient isotopes, can be obtained, which are not formed during neutron irradiation. In 1962–1966, there were attempts in the United States to produce transplutonium isotopes using a series of six underground nuclear explosions. Small samples of rock were extracted from the blast area immediately after the test to study the explosion products, but no isotopes with mass number greater than 257 could be detected, despite predictions that such isotopes would have relatively long half-lives of α-decay. This non-observation was attributed to spontaneous fission owing to the large speed of the products and to other decay channels, such as neutron emission and nuclear fission. From actinium to uranium Uranium and thorium were the first actinides discovered. Uranium was identified in 1789 by the German chemist Martin Heinrich Klaproth in pitchblende ore. He named it after the planet Uranus, which had been discovered eight years earlier. Klaproth was able to precipitate a yellow compound (likely sodium diuranate) by dissolving pitchblende in nitric acid and neutralizing the solution with sodium hydroxide. He then reduced the obtained yellow powder with charcoal, and extracted a black substance that he mistook for metal. Sixty years later, the French scientist Eugène-Melchior Péligot identified it as uranium oxide. He also isolated the first sample of uranium metal by heating uranium tetrachloride with metallic potassium. The atomic mass of uranium was then calculated as 120, but Dmitri Mendeleev in 1872 corrected it to 240 using his periodicity laws. This value was confirmed experimentally in 1882 by K. Zimmerman. Thorium oxide was discovered by Friedrich Wöhler in the mineral thorianite, which was found in Norway (1827). Jöns Jacob Berzelius characterized this material in more detail in 1828. By reduction of thorium tetrachloride with potassium, he isolated the metal and named it thorium after the Norse god of thunder and lightning Thor. The same isolation method was later used by Péligot for uranium. Actinium was discovered in 1899 by André-Louis Debierne, an assistant of Marie Curie, in the pitchblende waste left after removal of radium and polonium. He described the substance (in 1899) as similar to titanium and (in 1900) as similar to thorium. The discovery of actinium by Debierne was however questioned in 1971 and 2000, arguing that Debierne's publications in 1904 contradicted his earlier work of 1899–1900. This view instead credits the 1902 work of Friedrich Oskar Giesel, who discovered a radioactive element named emanium that behaved similarly to lanthanum. The name actinium comes from the , meaning beam or ray. This metal was discovered not by its own radiation but by the radiation of the daughter products. Owing to the close similarity of actinium and lanthanum and low abundance, pure actinium could only be produced in 1950. The term actinide was probably introduced by Victor Goldschmidt in 1937. Protactinium was possibly isolated in 1900 by William Crookes. It was first identified in 1913, when Kasimir Fajans and Oswald Helmuth Göhring encountered the short-lived isotope 234mPa (half-life 1.17 minutes) during their studies of the 238U decay. They named the new element brevium (from Latin brevis meaning brief); the name was changed to protoactinium (from Greek πρῶτος + ἀκτίς meaning "first beam element") in 1918 when two groups of scientists, led by the Austrian Lise Meitner and Otto Hahn of Germany and Frederick Soddy and John Cranston of Great Britain, independently discovered the much longer-lived 231Pa. The name was shortened to protactinium in 1949. This element was little characterized until 1960, when A. G. Maddock and his co-workers in the U.K. isolated 130 grams of protactinium from 60 tonnes of waste left after extraction of uranium from its ore. Neptunium and above Neptunium (named for the planet Neptune, the next planet out from Uranus, after which uranium was named) was discovered by Edwin McMillan and Philip H. Abelson in 1940 in Berkeley, California. They produced the 239Np isotope (half-life = 2.4 days) by bombarding uranium with slow neutrons. It was the first transuranium element produced synthetically. Transuranium elements do not occur in sizeable quantities in nature and are commonly synthesized via nuclear reactions conducted with nuclear reactors. For example, under irradiation with reactor neutrons, uranium-238 partially converts to plutonium-239: This synthesis reaction was used by Fermi and his collaborators in their design of the reactors located at the Hanford Site, which produced significant amounts of plutonium-239 for the nuclear weapons of the Manhattan Project and the United States' post-war nuclear arsenal. Actinides with the highest mass numbers are synthesized by bombarding uranium, plutonium, curium and californium with ions of nitrogen, oxygen, carbon, neon or boron in a particle accelerator. Thus nobelium was produced by bombarding uranium-238 with neon-22 as _{92}^{238}U + _{10}^{22}Ne -> _{102}^{256}No + 4_0^1n. The first isotopes of transplutonium elements, americium-241 and curium-242, were synthesized in 1944 by Glenn T. Seaborg, Ralph A. James and Albert Ghiorso. Curium-242 was obtained by bombarding plutonium-239 with 32-MeV α-particles _{94}^{239}Pu + _2^4He -> _{96}^{242}Cm + _0^1n. The americium-241 and curium-242 isotopes also were produced by irradiating plutonium in a nuclear reactor. The latter element was named after Marie Curie and her husband Pierre who are noted for discovering radium and for their work in radioactivity. Bombarding curium-242 with α-particles resulted in an isotope of californium 245Cf (1950), and a similar procedure yielded in 1949 berkelium-243 from americium-241. The new elements were named after Berkeley, California, by analogy with its lanthanide homologue terbium, which was named after the village of Ytterby in Sweden. In 1945, B. B. Cunningham obtained the first bulk chemical compound of a transplutonium element, namely americium hydroxide. Over the few years, milligram quantities of americium and microgram amounts of curium were accumulated that allowed production of isotopes of berkelium (Thomson, 1949) and californium (Thomson, 1950). Sizeable amounts of these elements were produced in 1958 (Burris B. Cunningham and Stanley G. Thomson), and the first californium compound (0.3 µg of CfOCl) was obtained in 1960 by B. B. Cunningham and J. C. Wallmann. Einsteinium and fermium were identified in 1952–1953 in the fallout from the "Ivy Mike" nuclear test (1 November 1952), the first successful test of a hydrogen bomb. Instantaneous exposure of uranium-238 to a large neutron flux resulting from the explosion produced heavy isotopes of uranium, including uranium-253 and uranium-255, and their β-decay yielded einsteinium-253 and fermium-255. The discovery of the new elements and the new data on neutron capture were initially kept secret on the orders of the US military until 1955 due to Cold War tensions. Nevertheless, the Berkeley team were able to prepare einsteinium and fermium by civilian means, through the neutron bombardment of plutonium-239, and published this work in 1954 with the disclaimer that it was not the first studies that had been carried out on those elements. The "Ivy Mike" studies were declassified and published in 1955. The first significant (submicrograms) amounts of einsteinium were produced in 1961 by Cunningham and colleagues, but this has not been done for fermium yet. The first isotope of mendelevium, 256Md (half-life 87 min), was synthesized by Albert Ghiorso, Glenn T. Seaborg, Gregory R. Choppin, Bernard G. Harvey and Stanley G. Thompson when they bombarded an 253Es target with alpha particles in the 60-inch cyclotron of Berkeley Radiation Laboratory; this was the first isotope of any element to be synthesized one atom at a time. There were several attempts to obtain isotopes of nobelium by Swedish (1957) and American (1958) groups, but the first reliable result was the synthesis of 256No by the Russian group (Georgy Flyorov et al.) in 1965, as acknowledged by the IUPAC in 1992. In their experiments, Flyorov et al. bombarded uranium-238 with neon-22. In 1961, Ghiorso et al. obtained the first isotope of lawrencium by irradiating californium (mostly californium-252) with boron-10 and boron-11 ions. The mass number of this isotope was not clearly established (possibly 258 or 259) at the time. In 1965, 256Lr was synthesized by Flyorov et al. from 243Am and 18O. Thus IUPAC recognized the nuclear physics teams at Dubna and Berkeley as the co-discoverers of lawrencium. Isotopes 32 isotopes of actinium and eight excited isomeric states of some of its nuclides were identified by 2016. Three isotopes, 225Ac, 227Ac and 228Ac, were found in nature and the others were produced in the laboratory; only the three natural isotopes are used in applications. Actinium-225 is a member of the radioactive neptunium series; it was first discovered in 1947 as a decay product of uranium-233, it is an α-emitter with a half-life of 10 days. Actinium-225 is less available than actinium-228, but is more promising in radiotracer applications. Actinium-227 (half-life 21.77 years) occurs in all uranium ores, but in small quantities. One gram of uranium (in radioactive equilibrium) contains only 2 gram of 227Ac. Actinium-228 is a member of the radioactive thorium series formed by the decay of 228Ra; it is a β− emitter with a half-life of 6.15 hours. In one tonne of thorium there is 5 gram of 228Ac. It was discovered by Otto Hahn in 1906. There are 31 known isotopes of thorium ranging in mass number from 208 to 238. Of these, the longest-lived is 232Th, whose half-life of means that it still exists in nature as a primordial nuclide. The next longest-lived is 230Th, an intermediate decay product of 238U with a half-life of 75,400 years. Several other thorium isotopes have half-lives over a day; all of these are also transient in the decay chains of 232Th, 235U, and 238U. 28 isotopes of protactinium are known with mass numbers 212–239 as well as three excited isomeric states. Only 231Pa and 234Pa have been found in nature. All the isotopes have short lifetimes, except for protactinium-231 (half-life 32,760 years). The most important isotopes are 231Pa and 233Pa, which is an intermediate product in obtaining uranium-233 and is the most affordable among artificial isotopes of protactinium. 233Pa has convenient half-life and energy of γ-radiation, and thus was used in most studies of protactinium chemistry. Protactinium-233 is a β-emitter with a half-life of 26.97 days. There are 26 known isotopes of uranium, having mass numbers 215–242 (except 220 and 241). Three of them, 234U, 235U and 238U, are present in appreciable quantities in nature. Among others, the most important is 233U, which is a final product of transformation of 232Th irradiated by slow neutrons. 233U has a much higher fission efficiency by low-energy (thermal) neutrons, compared e.g. with 235U. Most uranium chemistry studies were carried out on uranium-238 owing to its long half-life of 4.4 years. There are 24 isotopes of neptunium with mass numbers of 219, 220, and 223–244; they are all highly radioactive. The most popular among scientists are long-lived 237Np (t1/2 = 2.20 years) and short-lived 239Np, 238Np (t1/2 ~ 2 days). There are 20 known isotopes of plutonium, having mass numbers 228–247. The most stable isotope of plutonium is 244Pu with half-life of 8.13 years. Eighteen isotopes of americium are known with mass numbers from 229 to 247 (with the exception of 231). The most important are 241Am and 243Am, which are alpha-emitters and also emit soft, but intense γ-rays; both of them can be obtained in an isotopically pure form. Chemical properties of americium were first studied with 241Am, but later shifted to 243Am, which is almost 20 times less radioactive. The disadvantage of 243Am is production of the short-lived daughter isotope 239Np, which has to be considered in the data analysis. Among 19 isotopes of curium, ranging in mass number from 233 to 251, the most accessible are 242Cm and 244Cm; they are α-emitters, but with much shorter lifetime than the americium isotopes. These isotopes emit almost no γ-radiation, but undergo spontaneous fission with the associated emission of neutrons. More long-lived isotopes of curium (245–248Cm, all α-emitters) are formed as a mixture during neutron irradiation of plutonium or americium. Upon short irradiation, this mixture is dominated by 246Cm, and then 248Cm begins to accumulate. Both of these isotopes, especially 248Cm, have a longer half-life (3.48 years) and are much more convenient for carrying out chemical research than 242Cm and 244Cm, but they also have a rather high rate of spontaneous fission. 247Cm has the longest lifetime among isotopes of curium (1.56 years), but is not formed in large quantities because of the strong fission induced by thermal neutrons. Seventeen isotopes of berkelium were identified with mass numbers 233–234, 236, 238, and 240–252. Only 249Bk is available in large quantities; it has a relatively short half-life of 330 days and emits mostly soft β-particles, which are inconvenient for detection. Its alpha radiation is rather weak (1.45% with respect to β-radiation), but is sometimes used to detect this isotope. 247Bk is an alpha-emitter with a long half-life of 1,380 years, but it is hard to obtain in appreciable quantities; it is not formed upon neutron irradiation of plutonium because of the β-stability of isotopes of curium isotopes with mass number below 248. The 20 isotopes of californium with mass numbers 237–256 are formed in nuclear reactors; californium-253 is a β-emitter and the rest are α-emitters. The isotopes with even mass numbers (250Cf, 252Cf and 254Cf) have a high rate of spontaneous fission, especially 254Cf of which 99.7% decays by spontaneous fission. Californium-249 has a relatively long half-life (352 years), weak spontaneous fission and strong γ-emission that facilitates its identification. 249Cf is not formed in large quantities in a nuclear reactor because of the slow β-decay of the parent isotope 249Bk and a large cross section of interaction with neutrons, but it can be accumulated in the isotopically pure form as the β-decay product of (pre-selected) 249Bk. Californium produced by reactor-irradiation of plutonium mostly consists of 250Cf and 252Cf, the latter being predominant for large neutron fluences, and its study is hindered by the strong neutron radiation. Among the 18 known isotopes of einsteinium with mass numbers from 240 to 257, the most affordable is 253Es. It is an α-emitter with a half-life of 20.47 days, a relatively weak γ-emission and small spontaneous fission rate as compared with the isotopes of californium. Prolonged neutron irradiation also produces a long-lived isotope 254Es (t1/2 = 275.5 days). Twenty isotopes of fermium are known with mass numbers of 241–260. 254Fm, 255Fm and 256Fm are α-emitters with a short half-life (hours), which can be isolated in significant amounts. 257Fm (t1/2 = 100 days) can accumulate upon prolonged and strong irradiation. All these isotopes are characterized by high rates of spontaneous fission. Among the 17 known isotopes of mendelevium (mass numbers from 244 to 260), the most studied is 256Md, which mainly decays through the electron capture (α-radiation is ≈10%) with the half-life of 77 minutes. Another alpha emitter, 258Md, has a half-life of 53 days. Both these isotopes are produced from rare einsteinium (253Es and 255Es respectively), that therefore limits their availability. Long-lived isotopes of nobelium and isotopes of lawrencium (and of heavier elements) have relatively short half-lives. For nobelium, 11 isotopes are known with mass numbers 250–260 and 262. The chemical properties of nobelium and lawrencium were studied with 255No (t1/2 = 3 min) and 256Lr (t1/2 = 35 s). The longest-lived nobelium isotope, 259No, has a half-life of approximately 1 hour. Lawrencium has 13 known isotopes with mass numbers 251–262 and 266. The most stable of them all is 266Lr with a half life of 11 hours. Among all of these, the only isotopes that occur in sufficient quantities in nature to be detected in anything more than traces and have a measurable contribution to the atomic weights of the actinides are the primordial 232Th, 235U, and 238U, and three long-lived decay products of natural uranium, 230Th, 231Pa, and 234U. Natural thorium consists of 0.02(2)% 230Th and 99.98(2)% 232Th; natural protactinium consists of 100% 231Pa; and natural uranium consists of 0.0054(5)% 234U, 0.7204(6)% 235U, and 99.2742(10)% 238U. Formation in nuclear reactors The figure buildup of actinides is a table of nuclides with the number of neutrons on the horizontal axis (isotopes) and the number of protons on the vertical axis (elements). The red dot divides the nuclides in two groups, so the figure is more compact. Each nuclide is represented by a square with the mass number of the element and its half-life. Naturally existing actinide isotopes (Th, U) are marked with a bold border, alpha emitters have a yellow colour, and beta emitters have a blue colour. Pink indicates electron capture (236Np), whereas white stands for a long-lasting metastable state (242Am). The formation of actinide nuclides is primarily characterised by: Neutron capture reactions (n,γ), which are represented in the figure by a short right arrow. The (n,2n) reactions and the less frequently occurring (γ,n) reactions are also taken into account, both of which are marked by a short left arrow. Even more rarely and only triggered by fast neutrons, the (n,3n) reaction occurs, which is represented in the figure with one example, marked by a long left arrow. In addition to these neutron- or gamma-induced nuclear reactions, the radioactive conversion of actinide nuclides also affects the nuclide inventory in a reactor. These decay types are marked in the figure by diagonal arrows. The beta-minus decay, marked with an arrow pointing up-left, plays a major role for the balance of the particle densities of the nuclides. Nuclides decaying by positron emission (beta-plus decay) or electron capture (ϵ) do not occur in a nuclear reactor except as products of knockout reactions; their decays are marked with arrows pointing down-right. Due to the long half-lives of the given nuclides, alpha decay plays almost no role in the formation and decay of the actinides in a power reactor, as the residence time of the nuclear fuel in the reactor core is rather short (a few years). Exceptions are the two relatively short-lived nuclides 242Cm (T1/2 = 163 d) and 236Pu (T1/2 = 2.9 y). Only for these two cases, the α decay is marked on the nuclide map by a long arrow pointing down-left. A few long-lived actinide isotopes, such as 244Pu and 250Cm, cannot be produced in reactors because neutron capture does not happen quickly enough to bypass the short-lived beta-decaying nuclides 243Pu and 249Cm; they can however be generated in nuclear explosions, which have much higher neutron fluxes. Distribution in nature Thorium and uranium are the most abundant actinides in nature with the respective mass concentrations of 16 ppm and 4 ppm. Uranium mostly occurs in the Earth's crust as a mixture of its oxides in the mineral uraninite, which is also called pitchblende because of its black color. There are several dozens of other uranium minerals such as carnotite (KUO2VO4·3H2O) and autunite (Ca(UO2)2(PO4)2·nH2O). The isotopic composition of natural uranium is 238U (relative abundance 99.2742%), 235U (0.7204%) and 234U (0.0054%); of these 238U has the largest half-life of 4.51 years. The worldwide production of uranium in 2009 amounted to 50,572 tonnes, of which 27.3% was mined in Kazakhstan. Other important uranium mining countries are Canada (20.1%), Australia (15.7%), Namibia (9.1%), Russia (7.0%), and Niger (6.4%). The most abundant thorium minerals are thorianite (), thorite () and monazite, (). Most thorium minerals contain uranium and vice versa; and they all have significant fraction of lanthanides. Rich deposits of thorium minerals are located in the United States (440,000 tonnes), Australia and India (~300,000 tonnes each) and Canada (~100,000 tonnes). The abundance of actinium in the Earth's crust is only about 5%. Actinium is mostly present in uranium-containing, but also in other minerals, though in much smaller quantities. The content of actinium in most natural objects corresponds to the isotopic equilibrium of parent isotope 235U, and it is not affected by the weak Ac migration. Protactinium is more abundant (10−12%) in the Earth's crust than actinium. It was discovered in the uranium ore in 1913 by Fajans and Göhring. As actinium, the distribution of protactinium follows that of 235U. The half-life of the longest-lived isotope of neptunium, 237Np, is negligible compared to the age of the Earth. Thus neptunium is present in nature in negligible amounts produced as intermediate decay products of other isotopes. Traces of plutonium in uranium minerals were first found in 1942, and the more systematic results on 239Pu are summarized in the table (no other plutonium isotopes could be detected in those samples). The upper limit of abundance of the longest-living isotope of plutonium, 244Pu, is 3%. Plutonium could not be detected in samples of lunar soil. Owing to its scarcity in nature, most plutonium is produced synthetically. Extraction Owing to the low abundance of actinides, their extraction is a complex, multistep process. Fluorides of actinides are usually used because they are insoluble in water and can be easily separated with redox reactions. Fluorides are reduced with calcium, magnesium or barium: Among the actinides, thorium and uranium are the easiest to isolate. Thorium is extracted mostly from monazite: thorium pyrophosphate (ThP2O7) is reacted with nitric acid, and the produced thorium nitrate treated with tributyl phosphate. Rare-earth impurities are separated by increasing the pH in sulfate solution. In another extraction method, monazite is decomposed with a 45% aqueous solution of sodium hydroxide at 140 °C. Mixed metal hydroxides are extracted first, filtered at 80 °C, washed with water and dissolved with concentrated hydrochloric acid. Next, the acidic solution is neutralized with hydroxides to pH = 5.8 that results in precipitation of thorium hydroxide (Th(OH)4) contaminated with ~3% of rare-earth hydroxides; the rest of rare-earth hydroxides remains in solution. Thorium hydroxide is dissolved in an inorganic acid and then purified from the rare earth elements. An efficient method is the dissolution of thorium hydroxide in nitric acid, because the resulting solution can be purified by extraction with organic solvents: Th(OH)4 + 4 HNO3 → Th(NO3)4 + 4 H2O Metallic thorium is separated from the anhydrous oxide, chloride or fluoride by reacting it with calcium in an inert atmosphere: ThO2 + 2 Ca → 2 CaO + Th Sometimes thorium is extracted by electrolysis of a fluoride in a mixture of sodium and potassium chloride at 700–800 °C in a graphite crucible. Highly pure thorium can be extracted from its iodide with the crystal bar process. Uranium is extracted from its ores in various ways. In one method, the ore is burned and then reacted with nitric acid to convert uranium into a dissolved state. Treating the solution with a solution of tributyl phosphate (TBP) in kerosene transforms uranium into an organic form UO2(NO3)2(TBP)2. The insoluble impurities are filtered and the uranium is extracted by reaction with hydroxides as (NH4)2U2O7 or with hydrogen peroxide as UO4·2H2O. When the uranium ore is rich in such minerals as dolomite, magnesite, etc., those minerals consume much acid. In this case, the carbonate method is used for uranium extraction. Its main component is an aqueous solution of sodium carbonate, which converts uranium into a complex [UO2(CO3)3]4−, which is stable in aqueous solutions at low concentrations of hydroxide ions. The advantages of the sodium carbonate method are that the chemicals have low corrosivity (compared to nitrates) and that most non-uranium metals precipitate from the solution. The disadvantage is that tetravalent uranium compounds precipitate as well. Therefore, the uranium ore is treated with sodium carbonate at elevated temperature and under oxygen pressure: 2 UO2 + O2 + 6 → 2 [UO2(CO3)3]4− This equation suggests that the best solvent for the uranium carbonate processing is a mixture of carbonate with bicarbonate. At high pH, this results in precipitation of diuranate, which is treated with hydrogen in the presence of nickel yielding an insoluble uranium tetracarbonate. Another separation method uses polymeric resins as a polyelectrolyte. Ion exchange processes in the resins result in separation of uranium. Uranium from resins is washed with a solution of ammonium nitrate or nitric acid that yields uranyl nitrate, UO2(NO3)2·6H2O. When heated, it turns into UO3, which is converted to UO2 with hydrogen: UO3 + H2 → UO2 + H2O Reacting uranium dioxide with hydrofluoric acid changes it to uranium tetrafluoride, which yields uranium metal upon reaction with magnesium metal: 4 HF + UO2 → UF4 + 2 H2O To extract plutonium, neutron-irradiated uranium is dissolved in nitric acid, and a reducing agent (FeSO4, or H2O2) is added to the resulting solution. This addition changes the oxidation state of plutonium from +6 to +4, while uranium remains in the form of uranyl nitrate (UO2(NO3)2). The solution is treated with a reducing agent and neutralized with ammonium carbonate to pH = 8 that results in precipitation of Pu4+ compounds. In another method, Pu4+ and are first extracted with tributyl phosphate, then reacted with hydrazine washing out the recovered plutonium. The major difficulty in separation of actinium is the similarity of its properties with those of lanthanum. Thus actinium is either synthesized in nuclear reactions from isotopes of radium or separated using ion-exchange procedures. Properties Actinides have similar properties to lanthanides. The 6d and 7s electronic shells are filled in actinium and thorium, and the 5f shell is being filled with further increase in atomic number; the 4f shell is filled in the lanthanides. The first experimental evidence for the filling of the 5f shell in actinides was obtained by McMillan and Abelson in 1940. As in lanthanides (see lanthanide contraction), the ionic radius of actinides monotonically decreases with atomic number (see also Aufbau principle). Physical properties Actinides are typical metals. All of them are soft and have a silvery color (but tarnish in air), relatively high density and plasticity. Some of them can be cut with a knife. Their electrical resistivity varies between 15 and 150 µΩ·cm. The hardness of thorium is similar to that of soft steel, so heated pure thorium can be rolled in sheets and pulled into wire. Thorium is nearly half as dense as uranium and plutonium, but is harder than either of them. All actinides are radioactive, paramagnetic, and, with the exception of actinium, have several crystalline phases: plutonium has seven, and uranium, neptunium and californium three. The crystal structures of protactinium, uranium, neptunium and plutonium do not have clear analogs among the lanthanides and are more similar to those of the 3d-transition metals. All actinides are pyrophoric, especially when finely divided, that is, they spontaneously ignite upon reaction with air at room temperature. The melting point of actinides does not have a clear dependence on the number of f-electrons. The unusually low melting point of neptunium and plutonium (~640 °C) is explained by hybridization of 5f and 6d orbitals and the formation of directional bonds in these metals. Chemical properties Like the lanthanides, all actinides are highly reactive with halogens and chalcogens; however, the actinides react more easily. Actinides, especially those with a small number of 5f-electrons, are prone to hybridization. This is explained by the similarity of the electron energies at the 5f, 7s and 6d shells. Most actinides exhibit a larger variety of valence states, and the most stable are +6 for uranium, +5 for protactinium and neptunium, +4 for thorium and plutonium and +3 for actinium and other actinides. Actinium is chemically similar to lanthanum, which is explained by their similar ionic radii and electronic structures. Like lanthanum, actinium almost always has an oxidation state of +3 in compounds, but it is less reactive and has more pronounced basic properties. Among other trivalent actinides Ac3+ is least acidic, i.e. has the weakest tendency to hydrolyze in aqueous solutions. Thorium is rather active chemically. Owing to lack of electrons on 6d and 5f orbitals, the tetravalent thorium compounds are colorless. At pH < 3, the solutions of thorium salts are dominated by the cations [Th(H2O)8]4+. The Th4+ ion is relatively large, and depending on the coordination number can have a radius between 0.95 and 1.14 Å. As a result, thorium salts have a weak tendency to hydrolyse. The distinctive ability of thorium salts is their high solubility both in water and polar organic solvents. Protactinium exhibits two valence states; the +5 is stable, and the +4 state easily oxidizes to protactinium(V). Thus tetravalent protactinium in solutions is obtained by the action of strong reducing agents in a hydrogen atmosphere. Tetravalent protactinium is chemically similar to uranium(IV) and thorium(IV). Fluorides, phosphates, hypophosphate, iodate and phenylarsonates of protactinium(IV) are insoluble in water and dilute acids. Protactinium forms soluble carbonates. The hydrolytic properties of pentavalent protactinium are close to those of tantalum(V) and niobium(V). The complex chemical behavior of protactinium is a consequence of the start of the filling of the 5f shell in this element. Uranium has a valence from 3 to 6, the last being most stable. In the hexavalent state, uranium is very similar to the group 6 elements. Many compounds of uranium(IV) and uranium(VI) are non-stoichiometric, i.e. have variable composition. For example, the actual chemical formula of uranium dioxide is UO2+x, where x varies between −0.4 and 0.32. Uranium(VI) compounds are weak oxidants. Most of them contain the linear "uranyl" group, . Between 4 and 6 ligands can be accommodated in an equatorial plane perpendicular to the uranyl group. The uranyl group acts as a hard acid and forms stronger complexes with oxygen-donor ligands than with nitrogen-donor ligands. and are also the common form of Np and Pu in the +6 oxidation state. Uranium(IV) compounds exhibit reducing properties, e.g., they are easily oxidized by atmospheric oxygen. Uranium(III) is a very strong reducing agent. Owing to the presence of d-shell, uranium (as well as many other actinides) forms organometallic compounds, such as UIII(C5H5)3 and UIV(C5H5)4. Neptunium has valence states from 3 to 7, which can be simultaneously observed in solutions. The most stable state in solution is +5, but the valence +4 is preferred in solid neptunium compounds. Neptunium metal is very reactive. Ions of neptunium are prone to hydrolysis and formation of coordination compounds. Plutonium also exhibits valence states between 3 and 7 inclusive, and thus is chemically similar to neptunium and uranium. It is highly reactive, and quickly forms an oxide film in air. Plutonium reacts with hydrogen even at temperatures as low as 25–50 °C; it also easily forms halides and intermetallic compounds. Hydrolysis reactions of plutonium ions of different oxidation states are quite diverse. Plutonium(V) can enter polymerization reactions. The largest chemical diversity among actinides is observed in americium, which can have valence between 2 and 6. Divalent americium is obtained only in dry compounds and non-aqueous solutions (acetonitrile). Oxidation states +3, +5 and +6 are typical for aqueous solutions, but also in the solid state. Tetravalent americium forms stable solid compounds (dioxide, fluoride and hydroxide) as well as complexes in aqueous solutions. It was reported that in alkaline solution americium can be oxidized to the heptavalent state, but these data proved erroneous. The most stable valence of americium is 3 in the aqueous solutions and 3 or 4 in solid compounds. Valence 3 is dominant in all subsequent elements up to lawrencium (with the exception of nobelium). Curium can be tetravalent in solids (fluoride, dioxide). Berkelium, along with a valence of +3, also shows the valence of +4, more stable than that of curium; the valence 4 is observed in solid fluoride and dioxide. The stability of Bk4+ in aqueous solution is close to that of Ce4+. Only valence 3 was observed for californium, einsteinium and fermium. The divalent state is proven for mendelevium and nobelium, and in nobelium it is more stable than the trivalent state. Lawrencium shows valence 3 both in solutions and solids. The redox potential \mathit E_\frac{M^4+}{AnO2^2+} increases from −0.32 V in uranium, through 0.34 V (Np) and 1.04 V (Pu) to 1.34 V in americium revealing the increasing reduction ability of the An4+ ion from americium to uranium. All actinides form AnH3 hydrides of black color with salt-like properties. Actinides also produce carbides with the general formula of AnC or AnC2 (U2C3 for uranium) as well as sulfides An2S3 and AnS2. Compounds Oxides and hydroxides An – actinide **Depending on the isotopes Some actinides can exist in several oxide forms such as An2O3, AnO2, An2O5 and AnO3. For all actinides, oxides AnO3 are amphoteric and An2O3, AnO2 and An2O5 are basic, they easily react with water, forming bases: An2O3 + 3 H2O → 2 An(OH)3. These bases are poorly soluble in water and by their activity are close to the hydroxides of rare-earth metals. Np(OH)3 has not yet been synthesized, Pu(OH)3 has a blue color while Am(OH)3 is pink and curium hydroxide Cm(OH)3 is colorless. Bk(OH)3 and Cf(OH)3 are also known, as are tetravalent hydroxides for Np, Pu and Am and pentavalent for Np and Am. The strongest base is of actinium. All compounds of actinium are colorless, except for black actinium sulfide (Ac2S3). Dioxides of tetravalent actinides crystallize in the cubic system, same as in calcium fluoride. Thorium reacting with oxygen exclusively forms the dioxide: Th{} + O2 ->[\ce{1000^\circ C}] \overbrace{ThO2}^{Thorium~dioxide} Thorium dioxide is a refractory material with the highest melting point among any known oxide (3390 °C). Adding 0.8–1% ThO2 to tungsten stabilizes its structure, so the doped filaments have better mechanical stability to vibrations. To dissolve ThO2 in acids, it is heated to 500–600 °C; heating above 600 °C produces a very resistant to acids and other reagents form of ThO2. Small addition of fluoride ions catalyses dissolution of thorium dioxide in acids. Two protactinium oxides have been obtained: PaO2 (black) and Pa2O5 (white); the former is isomorphic with ThO2 and the latter is easier to obtain. Both oxides are basic, and Pa(OH)5 is a weak, poorly soluble base. Decomposition of certain salts of uranium, for example UO2(NO3)·6H2O in air at 400 °C, yields orange or yellow UO3. This oxide is amphoteric and forms several hydroxides, the most stable being uranyl hydroxide UO2(OH)2. Reaction of uranium(VI) oxide with hydrogen results in uranium dioxide, which is similar in its properties with ThO2. This oxide is also basic and corresponds to the uranium hydroxide (U(OH)4). Plutonium, neptunium and americium form two basic oxides: An2O3 and AnO2. Neptunium trioxide is unstable; thus, only Np3O8 could be obtained so far. However, the oxides of plutonium and neptunium with the chemical formula AnO2 and An2O3 are well characterized. Salts *An – actinide **Depending on the isotopes Actinides easily react with halogens forming salts with the formulas MX3 and MX4 (X = halogen). So the first berkelium compound, BkCl3, was synthesized in 1962 with an amount of 3 nanograms. Like the halogens of rare earth elements, actinide chlorides, bromides, and iodides are water-soluble, and fluorides are insoluble. Uranium easily yields a colorless hexafluoride, which sublimates at a temperature of 56.5 °C; because of its volatility, it is used in the separation of uranium isotopes with gas centrifuge or gaseous diffusion. Actinide hexafluorides have properties close to anhydrides. They are very sensitive to moisture and hydrolyze forming AnO2F2. The pentachloride and black hexachloride of uranium were synthesized, but they are both unstable. Action of acids on actinides yields salts, and if the acids are non-oxidizing then the actinide in the salt is in low-valence state: U + 2 H2SO4 → U(SO4)2 + 2 H2 2 Pu + 6 HCl → 2 PuCl3 + 3 H2 However, in these reactions the regenerating hydrogen can react with the metal, forming the corresponding hydride. Uranium reacts with acids and water much more easily than thorium. Actinide salts can also be obtained by dissolving the corresponding hydroxides in acids. Nitrates, chlorides, sulfates and perchlorates of actinides are water-soluble. When crystallizing from aqueous solutions, these salts forming a hydrates, such as Th(NO3)4·6H2O, Th(SO4)2·9H2O and Pu2(SO4)3·7H2O. Salts of high-valence actinides easily hydrolyze. So, colorless sulfate, chloride, perchlorate and nitrate of thorium transform into basic salts with formulas Th(OH)2SO4 and Th(OH)3NO3. The solubility and insolubility of trivalent and tetravalent actinides is like that of lanthanide salts. So phosphates, fluorides, oxalates, iodates and carbonates of actinides are weakly soluble in water; they precipitate as hydrates, such as ThF4·3H2O and Th(CrO4)2·3H2O. Actinides with oxidation state +6, except for the AnO22+-type cations, form [AnO4]2−, [An2O7]2− and other complex anions. For example, uranium, neptunium and plutonium form salts of the Na2UO4 (uranate) and (NH4)2U2O7 (diuranate) types. In comparison with lanthanides, actinides more easily form coordination compounds, and this ability increases with the actinide valence. Trivalent actinides do not form fluoride coordination compounds, whereas tetravalent thorium forms K2ThF6, KThF5, and even K5ThF9 complexes. Thorium also forms the corresponding sulfates (for example Na2SO4·Th(SO4)2·5H2O), nitrates and thiocyanates. Salts with the general formula An2Th(NO3)6·nH2O are of coordination nature, with the coordination number of thorium equal to 12. Even easier is to produce complex salts of pentavalent and hexavalent actinides. The most stable coordination compounds of actinides – tetravalent thorium and uranium – are obtained in reactions with diketones, e.g. acetylacetone. Applications While actinides have some established daily-life applications, such as in smoke detectors (americium) and gas mantles (thorium), they are mostly used in nuclear weapons and as fuel in nuclear reactors. The last two areas exploit the property of actinides to release enormous energy in nuclear reactions, which under certain conditions may become self-sustaining chain reactions. The most important isotope for nuclear power applications is uranium-235. It is used in the thermal reactor, and its concentration in natural uranium does not exceed 0.72%. This isotope strongly absorbs thermal neutrons releasing much energy. One fission act of 1 gram of 235U converts into about 1 MW·day. Of importance, is that emits more neutrons than it absorbs; upon reaching the critical mass, enters into a self-sustaining chain reaction. Typically, uranium nucleus is divided into two fragments with the release of 2–3 neutrons, for example: + ⟶ + + 3 Other promising actinide isotopes for nuclear power are thorium-232 and its product from the thorium fuel cycle, uranium-233. Emission of neutrons during the fission of uranium is important not only for maintaining the nuclear chain reaction, but also for the synthesis of the heavier actinides. Uranium-239 converts via β-decay into plutonium-239, which, like uranium-235, is capable of spontaneous fission. The world's first nuclear reactors were built not for energy, but for producing plutonium-239 for nuclear weapons. About half of the produced thorium is used as the light-emitting material of gas mantles. Thorium is also added into multicomponent alloys of magnesium and zinc. So the Mg-Th alloys are light and strong, but also have high melting point and ductility and thus are widely used in the aviation industry and in the production of missiles. Thorium also has good electron emission properties, with long lifetime and low potential barrier for the emission. The relative content of thorium and uranium isotopes is widely used to estimate the age of various objects, including stars (see radiometric dating). The major application of plutonium has been in nuclear weapons, where the isotope plutonium-239 was a key component due to its ease of fission and availability. Plutonium-based designs allow reducing the critical mass to about a third of that for uranium-235. The "Fat Man"-type plutonium bombs produced during the Manhattan Project used explosive compression of plutonium to obtain significantly higher densities than normal, combined with a central neutron source to begin the reaction and increase efficiency. Thus only 6.2 kg of plutonium was needed for an explosive yield equivalent to 20 kilotons of TNT. (See also Nuclear weapon design.) Hypothetically, as little as 4 kg of plutonium—and maybe even less—could be used to make a single atomic bomb using very sophisticated assembly designs. Plutonium-238 is potentially more efficient isotope for nuclear reactors, since it has smaller critical mass than uranium-235, but it continues to release much thermal energy (0.56 W/g) by decay even when the fission chain reaction is stopped by control rods. Its application is limited by its high price (about US$1000/g). This isotope has been used in thermopiles and water distillation systems of some space satellites and stations. So Galileo and Apollo spacecraft (e.g. Apollo 14) had heaters powered by kilogram quantities of plutonium-238 oxide; this heat is also transformed into electricity with thermopiles. The decay of plutonium-238 produces relatively harmless alpha particles and is not accompanied by gamma-irradiation. Therefore, this isotope (~160 mg) is used as the energy source in heart pacemakers where it lasts about 5 times longer than conventional batteries. Actinium-227 is used as a neutron source. Its high specific energy (14.5 W/g) and the possibility of obtaining significant quantities of thermally stable compounds are attractive for use in long-lasting thermoelectric generators for remote use. 228Ac is used as an indicator of radioactivity in chemical research, as it emits high-energy electrons (2.18 MeV) that can be easily detected. 228Ac-228Ra mixtures are widely used as an intense gamma-source in industry and medicine. Development of self-glowing actinide-doped materials with durable crystalline matrices is a new area of actinide utilization as the addition of alpha-emitting radionuclides to some glasses and crystals may confer luminescence. Toxicity Radioactive substances can harm human health via (i) local skin contamination, (ii) internal exposure due to ingestion of radioactive isotopes, and (iii) external overexposure by β-activity and γ-radiation. Together with radium and transuranium elements, actinium is one of the most dangerous radioactive poisons with high specific α-activity. The most important feature of actinium is its ability to accumulate and remain in the surface layer of skeletons. At the initial stage of poisoning, actinium accumulates in the liver. Another danger of actinium is that it undergoes radioactive decay faster than being excreted. Adsorption from the digestive tract is much smaller (~0.05%) for actinium than radium. Protactinium in the body tends to accumulate in the kidneys and bones. The maximum safe dose of protactinium in the human body is 0.03 µCi that corresponds to 0.5 micrograms of 231Pa. This isotope, which might be present in the air as aerosol, is 2.5 times more toxic than hydrocyanic acid. Plutonium, when entering the body through air, food or blood (e.g. a wound), mostly settles in the lungs, liver and bones with only about 10% going to other organs, and remains there for decades. The long residence time of plutonium in the body is partly explained by its poor solubility in water. Some isotopes of plutonium emit ionizing α-radiation, which damages the surrounding cells. The median lethal dose (LD50) for 30 days in dogs after intravenous injection of plutonium is 0.32 milligram per kg of body mass, and thus the lethal dose for humans is approximately 22 mg for a person weighing 70 kg; the amount for respiratory exposure should be approximately four times greater. Another estimate assumes that plutonium is 50 times less toxic than radium, and thus permissible content of plutonium in the body should be 5 µg or 0.3 µCi. Such amount is nearly invisible under microscope. After trials on animals, this maximum permissible dose was reduced to 0.65 µg or 0.04 µCi. Studies on animals also revealed that the most dangerous plutonium exposure route is through inhalation, after which 5–25% of inhaled substances is retained in the body. Depending on the particle size and solubility of the plutonium compounds, plutonium is localized either in the lungs or in the lymphatic system, or is absorbed in the blood and then transported to the liver and bones. Contamination via food is the least likely way. In this case, only about 0.05% of soluble 0.01% insoluble compounds of plutonium absorbs into blood, and the rest is excreted. Exposure of damaged skin to plutonium would retain nearly 100% of it. Using actinides in nuclear fuel, sealed radioactive sources or advanced materials such as self-glowing crystals has many potential benefits. However, a serious concern is the extremely high radiotoxicity of actinides and their migration in the environment. Use of chemically unstable forms of actinides in MOX and sealed radioactive sources is not appropriate by modern safety standards. There is a challenge to develop stable and durable actinide-bearing materials, which provide safe storage, use and final disposal. A key need is application of actinide solid solutions in durable crystalline host phases. Nuclear properties See also Actinides in the environment Lanthanides Major actinides Minor actinides Transuranics Notes References Bibliography External links Lawrence Berkeley Laboratory image of historic periodic table by Seaborg showing actinide series for the first time Lawrence Livermore National Laboratory, Uncovering the Secrets of the Actinides Los Alamos National Laboratory, Actinide Research Quarterly Periodic table
2314
https://en.wikipedia.org/wiki/Anita%20Hill
Anita Hill
Anita Faye Hill (born July 30, 1956) is an American lawyer, educator and author. She is a professor of social policy, law, and women's studies at Brandeis University and a faculty member of the university's Heller School for Social Policy and Management. She became a national figure in 1991 when she accused U.S. Supreme Court nominee Clarence Thomas, her supervisor at the United States Department of Education and the Equal Employment Opportunity Commission, of sexual harassment. Early life and education Anita Hill was born to a family of farmers in Lone Tree, Oklahoma, the youngest of Albert and Erma Hill's 13 children. Her family came from Arkansas, where her maternal grandfather Henry Eliot and all of her great-grandparents had been born into slavery. Hill was raised in the Baptist faith. Hill graduated from Morris High School, Oklahoma in 1973, where she was class valedictorian. After high school, she enrolled at Oklahoma State University and received a bachelor's degree in psychology with honors in 1977. She studied at Yale Law School, obtaining her Juris Doctor degree with honors in 1980. Early career Hill was admitted to the District of Columbia Bar in 1980 and began her law career as an associate with the Washington, D.C. firm of Wald, Harkrader & Ross. In 1981, she became an attorney-adviser to Clarence Thomas, who was then the Assistant Secretary of the U.S. Department of Education's Office for Civil Rights. When Thomas became chairman of the U.S. Equal Employment Opportunity Commission (EEOC) in 1982, Hill served as his assistant, leaving the job in 1983. Hill then became an assistant professor at the Evangelical Christian O. W. Coburn School of Law at Oral Roberts University where she taught from 1983 to 1986. In 1986, she joined the faculty at the University of Oklahoma College of Law where she taught commercial law and contracts. In 1989, she became the first tenured African American professor at OU. She left the university in 1996 due to ongoing calls for her resignation that began after her 1992 testimony. In 1998, she became a visiting scholar at Brandeis University and, in 2015, a university professor at the school. Allegations of sexual harassment against Clarence Thomas In 1991, President George H. W. Bush nominated Clarence Thomas, a federal circuit judge, to succeed retiring Associate Supreme Court Justice Thurgood Marshall. Senate hearings on his confirmation were initially completed with Thomas's good character being presented as a primary qualification for the high court because he had only been a judge for slightly more than one year. There had been little organized opposition to Thomas's nomination, and his confirmation seemed assured until a report of a private interview of Hill by the FBI was leaked to the press. The hearings were then reopened, and Hill was called to publicly testify. Hill said on October 11, 1991, in televised hearings that Thomas had sexually harassed her while he was her supervisor at the Department of Education and the EEOC. When questioned on why she followed Thomas to the second job after he had already allegedly harassed her, she said working in a reputable position within the civil rights field had been her ambition. The position was appealing enough to inhibit her from going back into private practice with her previous firm. She said that she only realized later in her life that the choice had represented poor judgment on her part, but that "at that time, it appeared that the sexual overtures... had ended." According to Hill, Thomas asked her out socially many times during her two years of employment as his assistant, and after she declined his requests, he used work situations to discuss sexual subjects and push advances. "He spoke about... such matters as women having sex with animals and films showing group sex or rape scenes," she said, adding that on several occasions Thomas graphically described "his own sexual prowess" and the details of his anatomy. Hill also recounted an instance in which Thomas examined a can of Coke on his desk and asked, "Who has put pubic hair on my Coke?" During the hearing, Republican Senator Orrin Hatch implied that "Hill was working in tandem with 'slick lawyers' and interest groups bent on destroying Thomas's chances to join the court." Thomas said he had considered Hill a friend whom he had helped at every turn, so when accusations of harassment came from her they were particularly hurtful and he said, "I lost the belief that if I did my best, all would work out." Four female witnesses waited in the wings to support Hill's credibility, but they were not called, due to what the Los Angeles Times described as a private, compromise deal between Republicans and the Senate Judiciary Committee chair, Democrat Joe Biden. Hill agreed to take a polygraph test. While senators and other authorities observed that polygraph results cannot be relied upon and are inadmissible in courts, Hill's results did support her statements. Thomas did not take a polygraph test. He made a vehement and complete denial, saying that he was being subjected to a "high-tech lynching for uppity blacks" by white liberals who were seeking to block a black conservative from taking a seat on the Supreme Court. After extensive debate, the United States Senate confirmed Thomas to the Supreme Court by a vote of 52–48, the narrowest margin since the 19th century. Members questioned Hill's credibility after the timeline of her events came into question. They mentioned the time delay of ten years between the alleged behavior by Thomas and Hill's accusations, and observed that Hill had followed Thomas to a second job and later had personal contacts with Thomas, including giving him a ride to an airport—behavior which they said would be inexplicable if Hill's allegations were true. Hill countered that she had come forward because she felt an obligation to share information on the character and actions of a person who was being considered for the Supreme Court. She testified that after leaving the EEOC, she had had two "inconsequential" phone conversations with Thomas, and had seen him personally on two occasions, once to get a job reference and the second time when he made a public appearance in Oklahoma where she was teaching. Doubts about the veracity of Hill's 1991 testimony persisted among conservatives long after Thomas took his seat on the Court. They were furthered by right-wing magazine American Spectator writer David Brock in his 1993 book The Real Anita Hill, though he later recanted the claims he had made which he described in his book as "character assassination," and apologized to Hill. After interviewing a number of women who alleged that Thomas had frequently subjected them to sexually explicit remarks, The Wall Street Journal reporters Jane Mayer and Jill Abramson wrote, Strange Justice: The Selling of Clarence Thomas, a book that concluded that Thomas had lied during his confirmation process. Richard Lacayo in his 1994 review of the book for Time magazine remarked, however, that "Their book doesn't quite nail that conclusion." In 2007, Kevin , a co-author of another book on Thomas, remarked that what happened between Thomas and Hill was "ultimately unknowable" by others, but that it was clear that "one of them lied, period." Writing in 2007, Neil Lewis of The New York Times remarked that, "To this day, each side in the epic he-said, she-said dispute has its unmovable believers." In 2007, Thomas published his autobiography, My Grandfather's Son, in which he revisited the controversy, calling Hill his "most traitorous adversary", and writing that pro-choice liberals, who feared he would vote to overturn Roe v. Wade if he were seated on the Supreme Court, used the scandal against him. He described Hill as touchy and apt to overreact, and her work at the EEOC as mediocre. He acknowledged that three other former EEOC employees had backed Hill's story, but said they had all left the agency on bad terms. He also wrote that Hill "was a left-winger who'd never expressed any religious sentiments whatsoever ... and the only reason why she'd held a job in the Reagan administration was because I'd given it to her." Hill denied the accusations in an op-ed in The New York Times saying she would not "stand by silently and allow [Justice Thomas], in his anger, to reinvent me." In October 2010, Thomas's wife Virginia, a conservative activist, left a voicemail at Hill's office asking that Hill apologize for her 1991 testimony. Hill initially believed the call was a hoax and referred the matter to the Brandeis University campus police who alerted the FBI. After being informed that the call was indeed from Virginia Thomas, Hill told the media that she did not believe the message was meant to be conciliatory and said, "I testified truthfully about my experience and I stand by that testimony." Virginia Thomas responded that the call had been intended as an "olive branch". Effects Shortly after the Thomas confirmation hearings, President George H. W. Bush dropped his opposition to a bill that gave harassment victims the right to seek federal damage awards, back pay, and reinstatement, and the law was passed by Congress. One year later, harassment complaints filed with the EEOC were up 50 percent and public opinion had shifted in Hill's favor. Private companies also started training programs to deter sexual harassment. When journalist Cinny Kennard asked Hill in 1991 if she would testify against Thomas all over again, Hill answered, "I'm not sure if I could have lived with myself if I had answered those questions any differently." The manner in which the Senate Judiciary Committee challenged and dismissed Hill's accusations of sexual harassment angered female politicians and lawyers. According to D.C. Congressional Delegate Eleanor Holmes Norton, Hill's treatment by the panel was a contributing factor to the large number of women elected to Congress in 1992. "Women clearly went to the polls with the notion in mind that you had to have more women in Congress," she said. In their anthology, All the Women Are White, All the Blacks Are Men, but Some of Us Are Brave, editors Gloria T. Hull, Patricia Bell-Scott, and Barbara Smith described black feminists mobilizing "a remarkable national response to the Anita Hill–Clarence Thomas controversy. In 1992, a feminist group began a nationwide fundraising campaign and then obtained matching state funds to endow a professorship at the University of Oklahoma College of Law in honor of Hill. Conservative Oklahoma state legislators reacted by demanding Hill's resignation from the university, then introducing a bill to prohibit the university from accepting donations from out-of-state residents, and finally attempting to pass legislation to close down the law school. Elmer Zinn Million, a local activist, compared Hill to Lee Harvey Oswald, the assassin of President Kennedy. Certain officials at the university attempted to revoke Hill's tenure. After five years of pressure, Hill resigned. The University of Oklahoma Law School defunded the Anita F. Hill professorship in May 1999, without the position having ever been filled. On April 25, 2019, the presidential campaign team for Joe Biden for the 2020 United States presidential election disclosed that he had called Hill to express "his regret for what she endured" in his role as the chairman of the Senate Judiciary Committee, presiding over the Thomas confirmation hearings. Hill said the call from Biden left her feeling "deeply unsatisfied". On June 13, 2019, Hill clarified that she did not consider Biden's actions disqualifying, and would be open to voting for him. In May 2020, Hill argued that sexual assault allegations made against Donald Trump as well as the sexual assault allegation against Biden should be investigated and their results "made available to the public." On September 5, 2020, it was reported that Hill had vowed to vote for Biden and to work with him on gender issues. Continued work and advocacy Hill continued to teach at the University of Oklahoma, though she spent two years as a visiting professor in California. She resigned her post in October 1996 and finished her final semester of teaching there. In her final semester, she taught a law school seminar on civil rights. An endowed chair was created in her name, but was later defunded without ever having been filled. Hill accepted a position as a visiting scholar at the Institute for the Study of Social Change at University of California, Berkeley in January 1997, but soon joined the faculty of Brandeis University—first at the Women's Studies Program, later moving to the Heller School for Social Policy and Management. In 2011, she also took a counsel position with the Civil Rights & Employment Practice group of the plaintiffs' law firm Cohen Milstein. Over the years, Hill has provided commentary on gender and race issues on national television programs, including 60 Minutes, Face the Nation, and Meet the Press. She has been a speaker on the topic of commercial law as well as race and women's rights. She is also the author of articles that have been published in The New York Times and Newsweek and has contributed to many scholarly and legal publications in the areas of international commercial law, bankruptcy, and civil rights. In 1995, Hill co-edited Race, Gender and Power in America: The Legacy of the Hill-Thomas Hearings with Emma Coleman Jordan. In 1997 Hill published her autobiography, Speaking Truth to Power, in which she chronicled her role in the Clarence Thomas confirmation controversy and wrote that creating a better society had been a motivating force in her life. She contributed the piece "The Nature of the Beast: Sexual Harassment" to the 2003 anthology Sisterhood Is Forever: The Women's Anthology for a New Millennium, edited by Robin Morgan. In 2011, Hill published her second book, Reimagining Equality: Stories of Gender, Race, and Finding Home, which focuses on the sub-prime lending crisis that resulted in the foreclosure of many homes owned by African-Americans. She calls for a new understanding about the importance of a home and its place in the American Dream. On March 26, 2015, the Brandeis Board of Trustees unanimously voted to recognize Hill with a promotion to Private University Professor of Social Policy, Law, and Women's Studies. On December 16, 2017, the Commission on Sexual Harassment and Advancing Equality in the Workplace was formed, selecting Hill to lead its charge against sexual harassment in the entertainment industry. The new initiative was spearheaded by co-chair of the Nike Foundation Maria Eitel, venture capitalist Freada Kapor Klein, Lucasfilm President Kathleen Kennedy and talent attorney Nina Shaw. The report found not only a saddening prevalence of continued bias but also stark differences in how varying demographics perceived discrimination and harassment. In September 2018, Hill wrote an op-ed in The New York Times regarding sexual assault allegations made by Christine Blasey Ford during the Brett Kavanaugh Supreme Court nomination. On November 8, 2018, Anita Hill spoke at the USC Dornsife's event, "From Social Movement to Social Impact: Putting an End to Sexual Harassment in the Workplace". Writings In 1994, Hill wrote a tribute to Thurgood Marshall, the first African American Supreme Court Justice who preceded Clarence Thomas, titled "A Tribute to Thurgood Marshall: A Man Who Broke with Tradition on Issues of Race and Gender". She outlined Marshall's contributions to the principles of equality as a judge and how his work has affected the lives of African Americans, specifically African American women. On October 20, 1998, Hill published the book Speaking Truth to Power. Throughout much of the book she gives details on her side of the sexual harassment controversy, and her professional relationship with Clarence Thomas. Aside from that, she also provides a glimpse of what her personal life was like all the way from her childhood days growing up in Oklahoma to her position as a law professor. Hill became a proponent for women's rights and feminism. This can be seen through the chapter she wrote in the 2007 book Women and leadership: the state of play and strategies for change. She wrote about women judges and why, in her opinion, they play such a large role in balancing the judicial system. She argues that since women and men have different life experiences, ways of thinking, and histories, both are needed for a balanced court system. She writes that in order for the best law system to be created in the United States, all people need the ability to be represented. In 2011, Hill's second book, Reimagining Equality: Stories of Gender, Race, and Finding Home was published. She discusses the relationship between the home and the American Dream. She also exposes the inequalities within gender and race and home ownership. She argues that inclusive democracy is more important than debates about legal rights. She uses her own history and history of other African American women such as Nannie Helen Burroughs, in order to strengthen her argument for reimagining equality altogether. On September 28, 2021, Hill published the book Believing: Our Thirty-Year Journey to End Gender Violence. Awards and recognition Hill received the American Bar Association's Commission on Women in the Profession's "Women of Achievement" award in 1992. In 2005, Hill was selected as a Fletcher Foundation Fellow. In 2008 she was awarded the Louis P. and Evelyn Smith First Amendment Award by the Ford Hall Forum. She also serves on the board of trustees for Southern Vermont College in Bennington, Vermont. Her opening statement to the Senate Judiciary Committee in 1991 is listed as in American Rhetoric's Top 100 Speeches of the 20th Century (listed by rank). She was inducted into the Oklahoma Women's Hall of Fame in 1993. On January 7, 2017, Hill was inducted as an honorary member of Zeta Phi Beta sorority at their National Executive Board Meeting in Dallas, Texas. The following year, Hill was awarded an honorary LLM degree by Wesleyan University. The Wing's Washington, D.C. location has a phone booth dedicated to Hill. Minor planet 6486 Anitahill, discovered by Eleanor Helin, is named in her honor. The official naming citation was published by the Minor Planet Center on November 8, 2019 (). In popular culture In 1991, the television sitcom Designing Women built its episode "The Strange Case of Clarence and Anita" around the hearings on the Clarence Thomas nomination. The following season in the episode "The Odyssey", the characters imagined what would happen if new president Bill Clinton nominated Anita Hill to the Supreme Court to sit next to Clarence Thomas. Hill is referenced in the 1992 Sonic Youth song "Youth Against Fascism." Her case also inspired the 1994 Law & Order episode "Virtue", about a young lawyer who feels pressured to sleep with her supervisor at her law firm. Anita Hill is mentioned in The X-Files episode "Musings of a Cigarette Smoking Man", which aired November 17, 1996. In the 1996 film Jerry Maguire, after Tom Cruise's character makes a pass at his employee (played by Renee Zellweger), he apologizes with, "I feel like Clarence Thomas." In 1999, Ernest Dickerson directed Strange Justice, a film based on the Anita Hill–Clarence Thomas controversy. Anita Hill is interviewed – unrelated to the Clarence Thomas case – about the film The Tin Drum in the documentary Banned in Oklahoma (2004), included in The Criterion Collection DVD of the film (2004). Hill's testimony is briefly shown in the 2005 movie North Country about the first class action lawsuit surrounding sexual harassment. Hill was the subject of the 2013 documentary film Anita by director Freida Lee Mock, which chronicles her experience during the Clarence Thomas scandal. The actor Kerry Washington portrayed Hill in the 2016 HBO film Confirmation. In 2018, entertainer John Oliver interviewed Hill on his television program Last Week Tonight during which Hill answered various questions and concerns about workplace sexual harassment in the present day. Hill has been interviewed by Stephen Colbert on The Late Show twice, once in 2018 and again in 2021. See also Clarence Thomas Supreme Court nomination Brett Kavanaugh Supreme Court nomination Christine Blasey Ford References External links Faculty profile at Brandeis University Audio lecture: Anita Hill discusses Reimagining Equality: Stories of Gender, Race, and Finding Home on October 4, 2011, on Forum Network. An Outline of the Anita Hill and Clarence Thomas Controversy at Roy Rosenzweig Center for History and New Media African American women speak out on Anita Hill-Clarence Thomas The complete transcripts of the Clarence Thomas--Anita Hill hearings : October 11,12,13, 1991 1956 births Living people 20th-century African-American academics 20th-century American academics 21st-century African-American academics 21st-century American academics 20th-century American lawyers 20th-century American women lawyers 20th-century American non-fiction writers 21st-century American non-fiction writers 20th-century African-American women writers 20th-century African-American writers 20th-century American women writers 21st-century African-American women writers 21st-century African-American writers 21st-century American women writers African-American women lawyers American legal scholars African-American legal scholars American women legal scholars Equal Employment Opportunity Commission members Sexual harassment in the United States American feminists African-American feminists American women non-fiction writers American autobiographers Women autobiographers American political writers American political women Brandeis University faculty University of Oklahoma faculty Oklahoma State University faculty Oral Roberts University faculty Yale Law School alumni Oklahoma State University alumni People from Okmulgee County, Oklahoma Lawyers from Washington, D.C. Writers from Oklahoma Clarence Thomas 20th-century African-American lawyers
2316
https://en.wikipedia.org/wiki/Audio%20file%20format
Audio file format
An audio file format is a file format for storing digital audio data on a computer system. The bit layout of the audio data (excluding metadata) is called the audio coding format and can be uncompressed, or compressed to reduce the file size, often using lossy compression. The data can be a raw bitstream in an audio coding format, but it is usually embedded in a container format or an audio data format with defined storage layer. Format types It is important to distinguish between the audio coding format, the container containing the raw audio data, and an audio codec. A codec performs the encoding and decoding of the raw audio data while this encoded data is (usually) stored in a container file. Although most audio file formats support only one type of audio coding data (created with an audio coder), a multimedia container format (as Matroska or AVI) may support multiple types of audio and video data. There are three major groups of audio file formats: Uncompressed audio formats, such as WAV, AIFF, AU or raw header-less PCM; Formats with lossless compression, such as FLAC, Monkey's Audio (filename extension .ape), WavPack (filename extension .wv), TTA, ATRAC Advanced Lossless, ALAC (filename extension .m4a), MPEG-4 SLS, MPEG-4 ALS, MPEG-4 DST, Windows Media Audio Lossless (WMA Lossless), and Shorten (SHN). Formats with lossy compression, such as Opus, MP3, Vorbis, Musepack, AAC, ATRAC and Windows Media Audio Lossy (WMA lossy). Uncompressed audio format One major uncompressed audio format, LPCM, is the same variety of PCM as used in Compact Disc Digital Audio and is the format most commonly accepted by low level audio APIs and D/A converter hardware. Although LPCM can be stored on a computer as a raw audio format, it is usually stored in a .wav file on Windows or in a .aiff file on macOS. The Audio Interchange File Format (AIFF) format is based on the Interchange File Format (IFF), and the WAV format is based on the similar Resource Interchange File Format (RIFF). WAV and AIFF are designed to store a wide variety of audio formats, lossless and lossy; they just add a small, metadata-containing header before the audio data to declare the format of the audio data, such as LPCM with a particular sample rate, bit depth, endianness and number of channels. Since WAV and AIFF are widely supported and can store LPCM, they are suitable file formats for storing and archiving an original recording. BWF (Broadcast Wave Format) is a standard audio format created by the European Broadcasting Union as a successor to WAV. Among other enhancements, BWF allows more robust metadata to be stored in the file. See European Broadcasting Union: Specification of the Broadcast Wave Format (EBU Technical document 3285, July 1997). This is the primary recording format used in many professional audio workstations in the television and film industry. BWF files include a standardized timestamp reference which allows for easy synchronization with a separate picture element. Stand-alone, file based, multi-track recorders from AETA, Sound Devices, Zaxcom, HHB Communications Ltd, Fostex, Nagra, Aaton, and TASCAM all use BWF as their preferred format. Lossless compressed audio format A lossless compressed audio format stores data in less space without losing any information. The original, uncompressed data can be recreated from the compressed version. Uncompressed audio formats encode both sound and silence with the same number of bits per unit of time. Encoding an uncompressed minute of absolute silence produces a file of the same size as encoding an uncompressed minute of music. In a lossless compressed format, however, the music would occupy a smaller file than an uncompressed format and the silence would take up almost no space at all. Lossless compression formats include FLAC, WavPack, Monkey's Audio, ALAC (Apple Lossless). They provide a compression ratio of about 2:1 (i.e. their files take up half the space of PCM). Development in lossless compression formats aims to reduce processing time while maintaining a good compression ratio. Lossy compressed audio format Lossy audio format enables even greater reductions in file size by removing some of the audio information and simplifying the data. This, of course, results in a reduction in audio quality, but a variety of techniques are used, mainly by exploiting psychoacoustics, to remove the parts of the sound that have the least effect on perceived quality, and to minimize the amount of audible noise added during the process. The popular MP3 format is probably the best-known example, but the AAC format found on the iTunes Music Store is also common. Most formats offer a range of degrees of compression, generally measured in bit rate. The lower the rate, the smaller the file and the more significant the quality loss. List of formats See also Video file format Audio compression (data) Comparison of audio coding formats Comparison of video container formats Comparison of video codecs List of open-source audio codecs Timeline of audio formats References Digital container formats
2321
https://en.wikipedia.org/wiki/Area%2051
Area 51
Area 51 is the common name of a highly classified United States Air Force (USAF) facility within the Nevada Test and Training Range. A remote detachment administered by Edwards Air Force Base, the facility is officially called Homey Airport or Groom Lake (after the salt flat next to its airfield). Details of its operations are not made public, but the USAF says that it is an open training range, and it is commonly thought to support the development and testing of experimental aircraft and weapons systems. The USAF and CIA acquired the site in 1955, primarily for flight testing the Lockheed U-2 aircraft. The intense secrecy surrounding the base has made it the frequent subject of conspiracy theories and a central component of unidentified flying object (UFO) folklore. It has never been declared a secret base, but all research and occurrences in Area 51 are Top Secret/Sensitive Compartmented Information (TS/SCI). The CIA publicly acknowledged the base's existence on 25 June 2013, following a Freedom of Information Act (FOIA) request filed in 2005 and declassified documents detailing its history and purpose. Area 51 is located in the southern portion of Nevada, north-northwest of Las Vegas. The surrounding area is a popular tourist destination, including the small town of Rachel on the "Extraterrestrial Highway". Geography Area 51 The original rectangular base of is now part of the so-called "Groom box", a rectangular area, measuring , of restricted airspace. The area is connected to the internal Nevada Test Site (NTS) road network, with paved roads leading south to Mercury and west to Yucca Flat. Leading northeast from the lake, the wide and well-maintained Groom Lake Road runs through a pass in the Jumbled Hills. The road formerly led to mines in the Groom basin but has been improved since their closure. Its winding course runs past a security checkpoint, but the restricted area around the base extends farther east. After leaving the restricted area, Groom Lake Road descends eastward to the floor of the Tikaboo Valley, passing the dirt-road entrances to several small ranches, before converging with State Route 375, the "Extraterrestrial Highway", south of Rachel. Area 51 shares a border with the Yucca Flat region of the Nevada Test Site, the location of 739 of the 928 nuclear tests conducted by the United States Department of Energy at NTS. The Yucca Mountain nuclear waste repository is southwest of Groom Lake. Groom Lake Groom Lake is a salt flat in Nevada used for runways of the Nellis Bombing Range Test Site airport (XTA/KXTA) on the north of the Area 51 USAF military installation. The lake at elevation is approximately from north to south and from east to west at its widest point. Located within the namesake Groom Lake Valley portion of the Tonopah Basin, the lake is south of Rachel, Nevada. History The origin of the name "Area 51" is unclear. It is believed to be from an Atomic Energy Commission (AEC) numbering grid, although Area 51 is not part of this system; it is adjacent to Area 15. Another explanation is that 51 was used because it was unlikely that the AEC would use the number. According to the Central Intelligence Agency (CIA), the correct names for the facility are Homey Airport (XTA/KXTA) and Groom Lake, though the name "Area 51" was used in a CIA document from the Vietnam War. The facility has also been referred to as "Dreamland" and "Paradise Ranch", among other nicknames, with the former also being the approach control call sign for the surrounding area. The USAF public relations has referred to the facility as "an operating location near Groom Dry Lake". The special use airspace around the field is referred to as Restricted Area 4808 North (R-4808N). Lead and silver were discovered in the southern part of the Groom Range in 1864, and the English company Groome Lead Mines Limited financed the Conception Mines in the 1870s, giving the district its name (nearby mines included Maria, Willow, and White Lake). J. B. Osborne and partners acquired the controlling interest in Groom in 1876, and Osbourne's son acquired it in the 1890s. Mining continued until 1918, then resumed after World War II until the early 1950s. The airfield on the Groom Lake site began service in 1942 as Indian Springs Air Force Auxiliary Field and consisted of two unpaved 5,000-foot (1,524 m) runways. U-2 program The Central Intelligence Agency (CIA) established the Groom Lake test facility in April 1955 for Project AQUATONE: the development of the Lockheed U-2 strategic reconnaissance aircraft. Project director Richard M. Bissell Jr. understood that the flight test and pilot training programs could not be conducted at Edwards Air Force Base or Lockheed's Palmdale facility, given the extreme secrecy surrounding the project. He conducted a search for a suitable testing site for the U-2 under the same extreme security as the rest of the project. He notified Lockheed, who sent an inspection team out to Groom Lake. According to Lockheed's U-2 designer Kelly Johnson: The lake bed made an ideal strip for testing aircraft, and the Emigrant Valley's mountain ranges and the NTS perimeter protected the site from visitors; it was about north of Las Vegas. The CIA asked the AEC to acquire the land, designated "Area 51" on the map, and to add it to the Nevada Test Site. Johnson named the area "Paradise Ranch" to encourage workers to move to "the new facility in the middle of nowhere", as the CIA later described it, and the name became shortened to "the Ranch". On 4May 1955, a survey team arrived at Groom Lake and laid out a north–south runway on the southwest corner of the lakebed and designated a site for a base support facility. The Ranch initially consisted of little more than a few shelters, workshops, and trailer homes in which to house its small team. A little over three months later, the base consisted of a single paved runway, three hangars, a control tower, and rudimentary accommodations for test personnel. The base's few amenities included a movie theater and volleyball court. There was also a mess hall, several wells, and fuel storage tanks. CIA, Air Force, and Lockheed personnel began arriving by July 1955. The Ranch received its first U-2 delivery on 24 July 1955 from Burbank on a C-124 Globemaster II cargo plane, accompanied by Lockheed technicians on a Douglas DC-3. Regular Military Air Transport Service flights were set up between Area 51 and Lockheed's offices in Burbank, California. To preserve secrecy, personnel flew to Nevada on Monday mornings and returned to California on Friday evenings. OXCART program Project OXCART was established in August 1959 for "antiradar studies, aerodynamic structural tests, and engineering designs" and all later work on the Lockheed A-12. This included testing at Groom Lake, which had inadequate facilities consisting of buildings for only 150 people, a asphalt runway, and limited fuel, hangar, and shop space. Groom Lake had received the name "Area 51" when A-12 test facility construction began in September 1960, including a new runway to replace the existing runway. Reynolds Electrical and Engineering Company (REECo) began construction of "Project 51" on 1October 1960 with double-shift construction schedules. The contractor upgraded base facilities and built a new runway (14/32) diagonally across the southwest corner of the lakebed. They marked an Archimedean spiral on the dry lake approximately two miles across so that an A-12 pilot approaching the end of the overrun could abort instead of plunging into the sagebrush. Area 51 pilots called it "The Hook". For crosswind landings, they marked two unpaved airstrips (runways 9/27 and 03/21) on the dry lakebed. By August 1961, construction of the essential facilities was complete; three surplus Navy hangars were erected on the base's north side while hangar7 was new construction. The original U-2 hangars were converted to maintenance and machine shops. Facilities in the main cantonment area included workshops and buildings for storage and administration, a commissary, a control tower, a fire station, and housing. The Navy also contributed more than 130 surplus Babbitt duplex housing units for long-term occupancy facilities. Older buildings were repaired, and additional facilities were constructed as necessary. A reservoir pond surrounded by trees served as a recreational area one mile north of the base. Other recreational facilities included a gymnasium, a movie theater, and a baseball diamond. A permanent aircraft fuel tank farm was constructed by early 1962 for the special JP-7 fuel required by the A-12. Seven tanks were constructed, with a total capacity of 1,320,000 gallons. Security was enhanced for the arrival of OXCART and the small mine was closed in the Groom basin. In January 1962, the Federal Aviation Administration (FAA) expanded the restricted airspace in the vicinity of Groom Lake, and the lakebed became the center of a 600-square mile addition to restricted area R-4808N. The CIA facility received eight USAF F-101 Voodoos for training, two T-33 Shooting Star trainers for proficiency flying, a C-130 Hercules for cargo transport, a U-3A for administrative purposes, a helicopter for search and rescue, and a Cessna 180 for liaison use, and Lockheed provided an F-104 Starfighter for use as a chase plane. The first A-12 test aircraft was covertly trucked from Burbank on 26 February 1962 and arrived at Groom Lake on 28 February. It made its first flight 26 April 1962 when the base had over 1,000 personnel. The closed airspace above Groom Lake was within the Nellis Air Force Range airspace, and pilots saw the A-12 20 to 30 times. Groom was also the site of the first Lockheed D-21 drone test flight on 22 December 1964. By the end of 1963, nine A-12s were at Area 51, assigned to the CIA-operated "1129th Special Activities Squadron". D-21 Tagboard Following the loss of Gary Powers' U-2 over the Soviet Union, there were several discussions about using the A-12 OXCART as an unpiloted drone aircraft. Although Kelly Johnson had come to support the idea of drone reconnaissance, he opposed the development of an A-12 drone, contending that the aircraft was too large and complex for such a conversion. However, the Air Force agreed to fund the study of a high-speed, high-altitude drone aircraft in October 1962. The Air Force interest seems to have moved the CIA to take action, the project designated "Q-12". By October 1963, the drone's design had been finalized. At the same time, the Q-12 underwent a name change. To separate it from the other A-12-based projects, it was renamed the "D-21". (The "12" was reversed to "21"). "Tagboard" was the project's code name. The first D-21 was completed in the spring of 1964 by Lockheed. After four more months of checkouts and static tests, the aircraft was shipped to Groom Lake and reassembled. It was to be carried by a two-seat derivative of the A-12, designated the "M-21". When the D-21/M-21 reached the launch point, the first step would be to blow off the D-21's inlet and exhaust covers. With the D-21/M-21 at the correct speed and altitude, the LCO would start the ramjet and the other systems of the D-21. "With the D-21's systems activated and running, and the launch aircraft at the correct point, the M-21 would begin a slight pushover, the LCO would push a final button, and the D-21 would come off the pylon". Difficulties were addressed throughout 1964 and 1965 at Groom Lake with various technical issues. Captive flights showed unforeseen aerodynamic difficulties. By late January 1966, more than a year after the first captive flight, everything seemed ready. The first D-21 launch was made on 5March 1966 with a successful flight, with the D-21 flying 120 miles with limited fuel. A second D-21 flight was successful in April 1966 with the drone flying 1,200 miles, reaching Mach 3.3 and 90,000 feet. An accident on 30 July 1966 with a fully fueled D-21, on a planned checkout flight, suffered from an unstart of the drone after its separation, causing it to collide with the M-21 launch aircraft. The two crewmen ejected and landed in the ocean 150 miles offshore. One crew member was picked up by a helicopter, but the other, having survived the aircraft breakup and ejection, drowned when sea water entered his pressure suit. Kelly Johnson personally cancelled the entire program, having had serious doubts about its feasibility from the start. A number of D-21s had already been produced, and rather than scrapping the whole effort, Johnson again proposed to the Air Force that they be launched from a B-52H bomber. By late summer of 1967, the modification work to both the D-21 (now designated D-21B) and the B-52Hs was complete. The test program could now resume. The test missions were flown out of Groom Lake, with the actual launches over the Pacific. The first D-21B to be flown was Article 501, the prototype. The first attempt was made on 28 September 1967 and ended in complete failure. As the B-52 was flying toward the launch point, the D-21B fell off the pylon. The B-52H gave a sharp lurch as the drone fell free. The booster fired and was "quite a sight from the ground". The failure was traced to a stripped nut on the forward right attachment point on the pylon. Several more tests were made, none of which met with success. However, the fact is that the resumptions of D-21 tests took place against a changing reconnaissance background. The A-12 had finally been allowed to deploy, and the SR-71 was soon to replace it. At the same time, new developments in reconnaissance satellite technology were nearing operation. Up to this point, the limited number of satellites available restricted coverage to the Soviet Union. A new generation of reconnaissance satellites could soon cover targets anywhere in the world. The satellites' resolution would be comparable to that of aircraft but without the slightest political risk. Time was running out for the Tagboard. Several more test flights, including two over China, were made from Beale AFB, California, in 1969 and 1970, to varying degrees of success. On 15 July 1971, Kelly Johnson received a wire canceling the D-21B program. The remaining drones were transferred by a C-5A and placed in dead storage. The tooling used to build the D-21Bs was ordered destroyed. Like the A-12 Oxcart, the D-21B Tagboard drones remained a Black airplane, even in retirement. Their existence was not suspected until August 1976, when the first group was placed in storage at the Davis-Monthan AFB Military Storage and Disposition Center. A second group arrived in 1977. They were labeled "GTD-21Bs" (GT stood for ground training). Davis-Monthan is an open base, with public tours of the storage area at the time, so the odd-looking drones were soon spotted and photos began appearing in magazines. Speculation about the D-21Bs circulated within aviation circles for years, and it was not until 1982 that details of the Tagboard program were released. However, it was not until 1993 that the B-52/D-21B program was made public. That same year, the surviving D-21Bs were released to museums. Foreign technology evaluation During the Cold War, one of the missions carried out by the United States was the test and evaluation of captured Soviet fighter aircraft. Beginning in the late 1960s, and for several decades, Area 51 played host to an assortment of Soviet-built aircraft. Munir Redfas defection with a Mikoyan-Gurevich MiG-21 from Iraq for Israel's Mossad in Operation Diamond led to the HAVE DOUGHNUT, HAVE DRILL and HAVE FERRY programs. The first MiGs flown in the United States were used to evaluate the aircraft in performance, technical, and operational capabilities, pitting the types against U.S. fighters. This was not a new mission, as testing of foreign technology by the USAF began during World War II. After the war, testing of acquired foreign technology was performed by the Air Technical Intelligence Center (ATIC, which became very influential during the Korean War), under the direct command of the Air Materiel Control Department. In 1961, ATIC became the Foreign Technology Division (FTD) and was reassigned to Air Force Systems Command. ATIC personnel were sent anywhere where foreign aircraft could be found. The focus of Air Force Systems Command limited the use of the fighter as a tool with which to train the front line tactical fighter pilots. Air Force Systems Command recruited its pilots from the Air Force Flight Test Center at Edwards Air Force Base, California, who were usually graduates from various test pilot schools. Tactical Air Command selected its pilots primarily from the ranks of the Weapons School graduates. In August 1966, Iraqi Air Force fighter pilot Captain Munir Redfa defected, flying his MiG-21 to Israel after being ordered to attack Iraqi Kurd villages with napalm. His aircraft was transferred to Groom Lake in late 1967 for study. Israel loaned the MiG-21 to the US Air Force from January 1968 to April 1968. In 1968, the US Air Force and Navy jointly formed a project known as HAVE DOUGHNUT in which Air Force Systems Command, Tactical Air Command, and the U.S. Navy's Air Test and Evaluation Squadron Four (VX-4) flew this acquired Soviet made aircraft in simulated air combat training. As U.S. possession of the Soviet MiG-21 was, itself, secret, it was tested at Groom Lake. A joint Air Force-Navy team was assembled for a series of dogfight tests. Comparisons between the F-4 and the MiG-21 indicated that, on the surface, they were evenly matched. The HAVE DOUGHNUT tests showed the skill of the man in the cockpit was what made the difference. When the Navy or Air Force pilots flew the MiG-21, the results were a draw; the F-4 would win some fights, the MiG-21 would win others. There were no clear advantages. The problem was not with the planes, but with the pilots flying them. The pilots would not fly either plane to its limits. One of the Navy pilots was Marland W. "Doc" Townsend, then commander of VF-121, the F-4 training squadron at NAS Miramar. He was an engineer and a Korean War veteran and had flown almost every navy aircraft. When he flew against the MiG-21, he would outmaneuver it every time. The Air Force pilots would not go vertical in the MiG-21. The HAVE DOUGHNUT project officer was Tom Cassidy, a pilot with VX-4, the Navy's Air Development Squadron at Point Mugu. He had been watching as Townsend "waxed" the Air Force MiG-21 pilots. Cassidy climbed into the MiG-21 and went up against Townsend's F-4. This time the result was far different. Cassidy was willing to fight in the vertical, flying the plane to the point where it was buffeting, just above the stall. Cassidy was able to get on the F-4's tail. After the flight, they realized the MiG-21 turned better than the F-4 at lower speeds. The key was for the F-4 to keep its speed up. An F-4 had defeated the MiG-21; the weakness of the Soviet plane had been found. Further test flights confirmed what was learned. It was also clear that the MiG-21 was a formidable enemy. United States pilots would have to fly much better than they had been to beat it. This would require a special school to teach advanced air combat techniques. On 12 August 1968, two Syrian air force lieutenants, Walid Adham and Radfan Rifai, took off in a pair of MiG-17Fs on a training mission. They lost their way and, believing they were over Lebanon, landed at the Betzet Landing Field in northern Israel. (One version has it that they were led astray by an Arabic-speaking Israeli). Prior to the end of 1968 these MiG-17s were transferred from Israeli stocks and added to the Area 51 test fleet. The aircraft were given USAF designations and fake serial numbers so that they could be identified in DOD standard flight logs. As in the earlier program, a small group of Air Force and Navy pilots conducted mock dogfights with the MiG-17s. Selected instructors from the Navy's Top Gun school at NAS Miramar, California, were chosen to fly against the MiGs for familiarization purposes. Very soon, the MiG-17's shortcomings became clear. It had an extremely simple, even crude, control system that lacked the power-boosted controls of American aircraft. The F-4's twin engines were so powerful it could accelerate out of range of the MiG-17's guns in thirty seconds. It was important for the F-4 to keep its distance from the MiG-17. As long as the F-4 was one and a half miles from the MiG-17, it was outside the reach of the Soviet fighter's guns, but the MiG was within reach of the F-4's missiles. The data from the HAVE DOUGHNUT and HAVE DRILL tests were provided to the newly formed Top Gun school at NAS Miramar. By 1970, the HAVE DRILL program was expanded; a few selected fleet F-4 crews were given the chance to fight the MiGs. The most important result of Project HAVE DRILL is that no Navy pilot who flew in the project defeated the MiG-17 Fresco in the first engagement. The HAVE DRILL dogfights were by invitation only. The other pilots based at Nellis Air Force Base were not to know about the U.S.-operated MiGs. To prevent any sightings, the airspace above the Groom Lake range was closed. On aeronautical maps, the exercise area was marked in red ink. The forbidden zone became known as "Red Square". During the remainder of the Vietnam War, the Navy kill ratio climbed to 8.33 to 1. In contrast, the Air Force rate improved only slightly to 2.83 to 1. The reason for this difference was Top Gun. The Navy had revitalized its air combat training, while the Air Force had stayed stagnant. Most of the Navy MiG kills were by Top Gun graduates. In May 1973, Project HAVE IDEA was formed, which took over from the older HAVE DOUGHNUT, HAVE FERRY and HAVE DRILL projects, and the project was transferred to the Tonopah Test Range Airport. At Tonopah, testing of foreign technology aircraft continued and expanded throughout the 1970s and 1980s. Area 51 also hosted another foreign materiel evaluation program called HAVE GLIB. This involved testing Soviet tracking and missile control radar systems. A complex of actual and replica Soviet-type threat systems began to grow around "Slater Lake", a mile northwest of the main base, along with an acquired Soviet "Barlock" search radar placed at Tonopah Air Force Station. They were arranged to simulate a Soviet-style air defense complex. The Air Force began funding improvements to Area 51 in 1977 under project SCORE EVENT. In 1979, the CIA transferred jurisdiction of the Area 51 site to the Air Force Flight Test Center at Edwards AFB, California. Sam Mitchell, the last CIA commander of Area 51, relinquished command to USAF Lt. Col. Larry D. McClain. In 2017, a USAF aircraft crashed at the site, killing the pilot, Colonel Eric "Doc" Schultz. The USAF refused to release further information regarding the crash. In 2022, unconfirmed reports emerged that the crash involved an SU-27 that was part of the classified Foreign Materials Exploitation program. The reports claimed that the aircraft suffered a technical issue that resulted in both crew members ejecting from the aircraft, resulting in the death of Schultz. Have Blue/F-117 program The Lockheed Have Blue prototype stealth fighter (a smaller proof-of-concept model of the F-117 Nighthawk) first flew at Groom in December 1977. In 1978, the Air Force awarded a full-scale development contract for the F-117 to Lockheed Corporation's Advanced Development Projects. On 17 January 1981 the Lockheed test team at Area 51 accepted delivery of the first full-scale development (FSD) prototype 79–780, designated YF-117A. At 6:05 am on 18 June 1981 Lockheed Skunk Works test pilot Hal Farley lifted the nose of YF-117A 79–780 off the runway of Area 51. Meanwhile, Tactical Air Command (TAC) decided to set up a group-level organization to guide the F-117A to an initial operating capability. That organization became the 4450th Tactical Group (Initially designated "A Unit"), which officially activated on 15 October 1979 at Nellis AFB, Nevada, although the group was physically located at Area 51. The 4450th TG also operated the A-7D Corsair II as a surrogate trainer for the F-117A, and these operations continued until 15 October 1982 under the guise of an avionics test mission. Flying squadrons of the 4450th TG were the 4450th Tactical Squadron (Initially designated "I Unit") activated on 11 June 1981, and 4451st Tactical Squadron (Initially designated "P Unit") on 15 January 1983. The 4450th TS, stationed at Area 51, was the first F-117A squadron, while the 4451st TS was stationed at Nellis AFB and was equipped with A-7D Corsair IIs painted in a dark motif, tail coded "LV". Lockheed test pilots put the YF-117 through its early paces. A-7Ds were used for pilot training before any F-117As had been delivered by Lockheed to Area 51, later the A-7D's were used for F-117A chase testing and other weapon tests at the Nellis Range. On 15 October 1982, Major Alton C. Whitley Jr. became the first USAF 4450th TG pilot to fly the F-117A. Although ideal for testing, Area 51 was not a suitable location for an operational group, so a new covert base had to be established for F-117 operations. Tonopah Test Range Airport was selected for operations of the first USAF F-117 unit, the 4450th Tactical Group (TG). From October 1979, the Tonopah Airport base was reconstructed and expanded. The 6,000-foot runway was lengthened to 10,000 feet. Taxiways, a concrete apron, a large maintenance hangar, and a propane storage tank were added. By early 1982, four more YF-117As were operating at the base. After finding a large scorpion in their offices, the testing team (Designated "R Unit") adopted it as their mascot and dubbed themselves the "Baja Scorpions". Testing of a series of ultra-secret prototypes continued at Area 51 until mid-1981 when testing transitioned to the initial production of F-117 stealth fighters. The F-117s were moved to and from Area 51 by C-5 during darkness to maintain security. The aircraft were defueled, disassembled, cradled, and then loaded aboard the C-5 at night, flown to Lockheed, and unloaded at night before reassembly and flight testing. Groom performed radar profiling, F-117 weapons testing, and training of the first group of frontline USAF F-117 pilots. While the "Baja Scorpions" were working on the F-117, there was also another group at work in secrecy, known as "the Whalers" working on Tacit Blue. A fly-by-wire technology demonstration aircraft with curved surfaces and composite material, to evade radar, was a prototype, and never went into production. Nevertheless, this strange-looking aircraft was responsible for many of the stealth technology advances that were used on several other aircraft designs, and had a direct influence on the B-2; with the first flight of Tacit Blue being performed on 5February 1982, by Northrop Grumman test pilot, Richard G. Thomas. Production FSD airframes from Lockheed were shipped to Area 51 for acceptance testing. As the Baja Scorpions tested the aircraft with functional check flights and L.O. verification, the operational airplanes were then transferred to the 4450th TG. On 17 May 1982, the move of the 4450th TG from Groom Lake to Tonopah was initiated, with the final components of the move completed in early 1983. Production FSD airframes from Lockheed were shipped to Area 51 for acceptance testing. As the Baja Scorpions tested the aircraft with functional check flights and L.O. verification, the operational airplanes were then transferred to the 4450th TG at Tonopah. The R-Unit was inactivated on 30 May 1989. Upon inactivation, the unit was reformed as Detachment 1, 57th Fighter Weapons Wing (FWW). In 1990, the last F-117A (843) was delivered from Lockheed. After completion of acceptance flights at Area 51 of this last new F-117A aircraft, the flight test squadron continued flight test duties of refurbished aircraft after modifications by Lockheed. In February/March 1992 the test unit moved from Area 51 to the USAF Palmdale Plant 42 and was integrated with the Air Force Systems Command 6510th Test Squadron. Some testing, especially RCS verification and other classified activity was still conducted at Area 51 throughout the operational lifetime of the F-117. The recently inactivated (2008) 410th Flight Test Squadron traces its roots, if not its formal lineage to the 4450th TG R-unit. Later operations Since the F-117 became operational in 1983, operations at Groom Lake have continued. The base and its associated runway system were expanded, including the expansion of housing and support facilities. In 1995, the federal government expanded the exclusionary area around the base to include nearby mountains that had hitherto afforded the only decent overlook of the base, prohibiting access to of land formerly administered by the Bureau of Land Management. On 22 October 2015, a federal judge signed an order giving land that belonged to a Nevada family since the 1870s to the United States Air Force for expanding Area 51. According to the judge, the land that overlooked the base was taken to address security and safety concerns connected with their training and testing. Legal status U.S. government's positions on Area 51 The United States government has provided minimal information regarding Area 51. The area surrounding the lake is permanently off-limits to both civilian and normal military air traffic. Security clearances are checked regularly; cameras and weaponry are not allowed. Even military pilots training in the NAFR risk disciplinary action if they stray into the exclusionary "box" surrounding Groom's airspace. Surveillance is supplemented using buried motion sensors. Area 51 is a common destination for Janet, a small fleet of passenger aircraft operated on behalf of the Air Force to transport military personnel, primarily from Harry Reid International Airport. The United States Geological Survey (USGS) topographic map for the area only shows the long-disused Groom Mine, but USGS aerial photographs of the site in 1959 and 1968 were publicly available. A civil aviation chart published by the Nevada Department of Transportation shows a large restricted area, defined as part of the Nellis restricted airspace. The National Atlas shows the area as lying within the Nellis Air Force Base. There are higher resolution and newer images available from other satellite imagery providers, including Russian providers and the IKONOS. These show the runway markings, base facilities, aircraft, and vehicles. In 1998 USAF officially acknowledged the site's existence. On 25 June 2013, the CIA released an official history of the U-2 and OXCART projects which acknowledged that the U-2 was tested at Area 51, in response to a Freedom of Information Act request submitted in 2005 by Jeffrey T. Richelson of George Washington University's National Security Archive. It contains numerous references to Area 51 and Groom Lake, along with a map of the area. Media reports stated that releasing the CIA history was the first governmental acknowledgement of Area 51's existence; rather, it was the first official acknowledgement of specific activity at the site. Environmental lawsuit In 1994, five unnamed civilian contractors and the widows of contractors Walter Kasza and Robert Frost sued the Air Force and the United States Environmental Protection Agency. They alleged that they had been present when large quantities of unknown chemicals had been burned in open pits and trenches at Groom. Rutgers University biochemists analyzed biopsies from the complainants and found high levels of dioxin, dibenzofuran, and trichloroethylene in their body fat. The complainants alleged that they had sustained skin, liver, and respiratory injuries due to their work at Groom and that this had contributed to the deaths of Frost and Kasza. The suit sought compensation for the injuries, claiming that the Air Force had illegally handled toxic materials and that the EPA had failed in its duty to enforce the Resource Conservation and Recovery Act which governs the handling of dangerous materials. They also sought detailed information about the chemicals, hoping that this would facilitate the medical treatment of survivors. Congressman Lee H. Hamilton, former chairman of the House Intelligence Committee, told 60 Minutes reporter Lesley Stahl, "The Air Force is classifying all information about Area 51 in order to protect themselves from a lawsuit." The government invoked the State Secrets Privilege and petitioned U.S. District Judge Philip Pro to disallow disclosure of classified documents or examination of secret witnesses, claiming that this would expose classified information and threaten national security. Judge Pro rejected the government's argument, so President Bill Clinton issued a Presidential Determination exempting what it called "the Air Force's Operating Location Near Groom Lake, Nevada" from environmental disclosure laws. Consequently, Pro dismissed the suit due to lack of evidence. Turley appealed to the U.S. Court of Appeals for the Ninth Circuit on the grounds that the government was abusing its power to classify material. Secretary of the Air Force Sheila E. Widnall filed a brief which stated that disclosures of the materials present in the air and water near Groom "can reveal military operational capabilities or the nature and scope of classified operations." The Ninth Circuit rejected Turley's appeal and the U.S. Supreme Court refused to hear it, putting an end to the complainants' case. The President annually issues a determination continuing the Groom exception which is the only formal recognition that the government has ever given that Groom Lake is more than simply another part of the Nellis complex. An unclassified memo on the safe handling of F-117 Nighthawk material was posted on an Air Force web site in 2005. This discussed the same materials for which the complainants had requested information, which the government had claimed was classified. The memo was removed shortly after journalists became aware of it. Civil aviation identification In December 2007, airline pilots noticed that the base had appeared in their aircraft navigation systems' latest Jeppesen database revision with the ICAO airport identifier code of KXTA and listed as "Homey Airport". The probably inadvertent release of the airport data led to advice by the Aircraft Owners and Pilots Association (AOPA) that student pilots should be explicitly warned about KXTA, not to consider it as a waypoint or destination for any flight even though it now appears in public navigation databases. Security The perimeter of the base is marked out by orange posts and patrolled by guards in white pickup trucks and camouflage fatigues. The guards are popularly referred to as "camo dudes" by enthusiasts. The guards will not answer questions about their employers; however, according to the New York Daily News, there are indications they are employed through a contractor such as AECOM. Signage around the base perimeter advises that deadly force is authorized against trespassers. Technology is also heavily used to maintain the border of the base; this includes surveillance cameras and motion detectors. Some of these motion detectors are placed some distance away from the base on public land to notify guards of people approaching. 1974 Skylab photography Dwayne A. Day published "Astronauts and Area 51: the Skylab Incident" in The Space Review in January 2006. It was based on a memo written in 1974 to CIA director William Colby by an unknown CIA official. The memo reported that astronauts on board Skylab had inadvertently photographed a certain location: The name of the location was obscured, but the context led Day to believe that the subject was Groom Lake. Day wrote that "the CIA considered no other spot on Earth to be as sensitive as Groom Lake". Even within the agency's National Photographic Interpretation Center that handled classified reconnaissance satellite photographs, images of the site were removed from film rolls and stored separately as not all photo interpreters had security clearance for the information. The memo details debate between federal agencies regarding whether the images should be classified, with Department of Defense agencies arguing that it should and NASA and the State Department arguing that it should not be classified. The memo itself questions the legality of retroactively classifying unclassified images. The memo includes handwritten remarks, apparently by Director of Central Intelligence Colby: The declassified documents do not disclose the outcome of discussions regarding the Skylab imagery. The debate proved moot, as the photograph appeared in the Federal Government's Archive of Satellite Imagery along with the remaining Skylab photographs. 2019 shooting incident On 28 January 2019, an unidentified man drove through a security checkpoint near Mercury, Nevada, in an apparent attempt to enter the base. After an vehicle pursuit by base security, the man exited his vehicle carrying a "cylindrical object" and was shot dead by NNSS security officers and sheriff's deputies after refusing to obey requests to halt. There were no other injuries reported. UFO and other conspiracy theories Area 51 has become a focus of modern conspiracy theories due to its secretive nature and connection to classified aircraft research. Theories include: The storage, examination, and reverse engineering of crashed alien spacecraft, including material supposedly recovered at Roswell, the study of their occupants, and the manufacture of aircraft based on alien technology Meetings or joint undertakings with extraterrestrials The development of exotic energy weapons for the Strategic Defense Initiative (SDI) or other weapons programs The development of weather control The development of time travel and teleportation technology The development of exotic propulsion systems related to the Aurora Program Activities related to a shadowy one-world government or the Majestic 12 organization Many of the hypotheses concern underground facilities at Groom or at Papoose Lake (also known as "S-4 location"), south, and include claims of a transcontinental underground railroad system, a disappearing airstrip nicknamed the "Cheshire Airstrip", after Lewis Carroll's Cheshire cat, which briefly appears when water is sprayed onto its camouflaged asphalt, and engineering based on alien technology. In the mid-1950s, civilian aircraft flew under 20,000 feet while military aircraft flew up to 40,000 feet. The U-2 began flying above 60,000 feet and there was an increasing number of UFO sighting reports. Sightings occurred most often during early evening hours, when airline pilots flying west saw the U-2's silver wings reflect the setting sun, giving the aircraft a "fiery" appearance. Many sighting reports came to the Air Force's Project Blue Book, which investigated UFO sightings, through air-traffic controllers and letters to the government. The project checked U-2 and later OXCART flight records to eliminate the majority of UFO reports that it received during the late 1950s and 1960s, although it could not reveal to the letter writers the truth behind what they saw. Similarly, veterans of experimental projects such as OXCART at Area 51 agree that their work inadvertently prompted many of the UFO sightings and other rumors: They believe that the rumors helped maintain secrecy over Area 51's actual operations. The veterans deny the existence of a vast underground railroad system, although many of Area 51's operations did occur underground. Bob Lazar claimed in 1989 that he had worked at Area 51's "Sector Four (S-4)", said to be located underground inside the Papoose Range near Papoose Lake. He claimed that he was contracted to work with alien spacecraft that the government had in its possession. Similarly, the 1996 documentary Dreamland directed by Bruce Burgess included an interview with a 71-year-old mechanical engineer who claimed to be a former employee at Area 51 during the 1950s. His claims included that he had worked on a "flying disc simulator" which had been based on a disc originating from a crashed extraterrestrial craft and was used to train pilots. He also claimed to have worked with an extraterrestrial being named "J-Rod" and described as a "telepathic translator". In 2004, Dan Burisch (pseudonym of Dan Crain) claimed to have worked on cloning alien viruses at Area 51, also alongside the alien named "J-Rod". Burisch's scholarly credentials are the subject of much debate, as he was apparently working as a Las Vegas parole officer in 1989 while also earning a PhD at State University of New York (SUNY). In July 2019, more than 2,000,000 people responded to a joke proposal to storm Area 51 which appeared in an anonymous Facebook post. The event, scheduled for 20 September 2019, was billed as "Storm Area 51, They Can't Stop All of Us", an attempt to "see them aliens". Air Force spokeswoman Laura McAndrews said the government "would discourage anyone from trying to come into the area where we train American armed forces". Two music festivals in rural Nevada, AlienStock and Storm Area 51 Basecamp, were subsequently organized to capitalize on the popularity of the original Facebook event. Between 1,500 and 3,000 people showed up at the festivals, while over 150 people made the journey over several miles of rough roads to get near the gates to Area 51. Seven people were reportedly arrested at the event. See also Area 52 (disambiguation) Black operation Black project Black site List of United States Air Force installations Special access program Footnotes Citations Sources Darlington, David (1998). Area 51: The Dreamland Chronicles. New York: Henry Holt. Patton, Phil (1998). Dreamland: Travels Inside the Secret World of Roswell and Area 51. New York: Villard/Random House Stahl, Lesley "Area 51 / Catch 22" 60 Minutes CBS Television 17 March 1996, a US TV news magazine's segment about the environmental lawsuit. External links Las Vegas sectional aeronautical chart, centered on Groom Lake (Federal Aviation Administration – SkyVector.com) 1942 establishments in Nevada Buildings and structures in Lincoln County, Nevada Conspiracy theories in the United States Installations of the Central Intelligence Agency Military installations in Nevada Research installations of the United States Air Force Secret places in the United States UFO conspiracy theories UFO culture in the United States
2322
https://en.wikipedia.org/wiki/Audio%20signal%20processing
Audio signal processing
Audio signal processing is a subfield of signal processing that is concerned with the electronic manipulation of audio signals. Audio signals are electronic representations of sound waves—longitudinal waves which travel through air, consisting of compressions and rarefactions. The energy contained in audio signals or sound power level is typically measured in decibels. As audio signals may be represented in either digital or analog format, processing may occur in either domain. Analog processors operate directly on the electrical signal, while digital processors operate mathematically on its digital representation. History The motivation for audio signal processing began at the beginning of the 20th century with inventions like the telephone, phonograph, and radio that allowed for the transmission and storage of audio signals. Audio processing was necessary for early radio broadcasting, as there were many problems with studio-to-transmitter links. The theory of signal processing and its application to audio was largely developed at Bell Labs in the mid 20th century. Claude Shannon and Harry Nyquist's early work on communication theory, sampling theory and pulse-code modulation (PCM) laid the foundations for the field. In 1957, Max Mathews became the first person to synthesize audio from a computer, giving birth to computer music. Major developments in digital audio coding and audio data compression include differential pulse-code modulation (DPCM) by C. Chapin Cutler at Bell Labs in 1950, linear predictive coding (LPC) by Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966, adaptive DPCM (ADPCM) by P. Cummiskey, Nikil S. Jayant and James L. Flanagan at Bell Labs in 1973, discrete cosine transform (DCT) coding by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974, and modified discrete cosine transform (MDCT) coding by J. P. Princen, A. W. Johnson and A. B. Bradley at the University of Surrey in 1987. LPC is the basis for perceptual coding and is widely used in speech coding, while MDCT coding is widely used in modern audio coding formats such as MP3 and Advanced Audio Coding (AAC). Types Analog An analog audio signal is a continuous signal represented by an electrical voltage or current that is analogous to the sound waves in the air. Analog signal processing then involves physically altering the continuous signal by changing the voltage or current or charge via electrical circuits. Historically, before the advent of widespread digital technology, analog was the only method by which to manipulate a signal. Since that time, as computers and software have become more capable and affordable, digital signal processing has become the method of choice. However, in music applications, analog technology is often still desirable as it often produces nonlinear responses that are difficult to replicate with digital filters. Digital A digital representation expresses the audio waveform as a sequence of symbols, usually binary numbers. This permits signal processing using digital circuits such as digital signal processors, microprocessors and general-purpose computers. Most modern audio systems use a digital approach as the techniques of digital signal processing are much more powerful and efficient than analog domain signal processing. Applications Processing methods and application areas include storage, data compression, music information retrieval, speech processing, localization, acoustic detection, transmission, noise cancellation, acoustic fingerprinting, sound recognition, synthesis, and enhancement (e.g. equalization, filtering, level compression, echo and reverb removal or addition, etc.). Audio broadcasting Audio signal processing is used when broadcasting audio signals in order to enhance their fidelity or optimize for bandwidth or latency. In this domain, the most important audio processing takes place just before the transmitter. The audio processor here must prevent or minimize overmodulation, compensate for non-linear transmitters (a potential issue with medium wave and shortwave broadcasting), and adjust overall loudness to the desired level. Active noise control Active noise control is a technique designed to reduce unwanted sound. By creating a signal that is identical to the unwanted noise but with the opposite polarity, the two signals cancel out due to destructive interference. Audio synthesis Audio synthesis is the electronic generation of audio signals. A musical instrument that accomplishes this is called a synthesizer. Synthesizers can either imitate sounds or generate new ones. Audio synthesis is also used to generate human speech using speech synthesis. Audio effects Audio effects alter the sound of a musical instrument or other audio source. Common effects include distortion, often used with electric guitar in electric blues and rock music; dynamic effects such as volume pedals and compressors, which affect loudness; filters such as wah-wah pedals and graphic equalizers, which modify frequency ranges; modulation effects, such as chorus, flangers and phasers; pitch effects such as pitch shifters; and time effects, such as reverb and delay, which create echoing sounds and emulate the sound of different spaces. Musicians, audio engineers and record producers use effects units during live performances or in the studio, typically with electric guitar, bass guitar, electronic keyboard or electric piano. While effects are most frequently used with electric or electronic instruments, they can be used with any audio source, such as acoustic instruments, drums, and vocals. Computer audition See also Sound card Sound effect References Further reading Audio electronics Signal processing
2323
https://en.wikipedia.org/wiki/Amdahl%27s%20law
Amdahl's law
In computer architecture, Amdahl's law (or Amdahl's argument) is a formula which gives the theoretical speedup in latency of the execution of a task at fixed workload that can be expected of a system whose resources are improved. It states that "the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used". It is named after computer scientist Gene Amdahl, and was presented at the American Federation of Information Processing Societies (AFIPS) Spring Joint Computer Conference in 1967. Amdahl's law is often used in parallel computing to predict the theoretical speedup when using multiple processors. For example, if a program needs 20 hours to complete using a single thread, but a one-hour portion of the program cannot be parallelized, therefore only the remaining 19 hours' () execution time can be parallelized, then regardless of how many threads are devoted to a parallelized execution of this program, the minimum execution time is always more than 1 hour. Hence, the theoretical speedup is less than 20 times the single thread performance, . Definition Amdahl's law can be formulated in the following way: where Slatency is the theoretical speedup of the execution of the whole task; s is the speedup of the part of the task that benefits from improved system resources; p is the proportion of execution time that the part benefiting from improved resources originally occupied. Furthermore, shows that the theoretical speedup of the execution of the whole task increases with the improvement of the resources of the system and that regardless of the magnitude of the improvement, the theoretical speedup is always limited by the part of the task that cannot benefit from the improvement. Amdahl's law applies only to the cases where the problem size is fixed. In practice, as more computing resources become available, they tend to get used on larger problems (larger datasets), and the time spent in the parallelizable part often grows much faster than the inherently serial work. In this case, Gustafson's law gives a less pessimistic and more realistic assessment of the parallel performance. Derivation A task executed by a system whose resources are improved compared to an initial similar system can be split up into two parts: a part that does not benefit from the improvement of the resources of the system; a part that benefits from the improvement of the resources of the system. An example is a computer program that processes files. A part of that program may scan the directory of the disk and create a list of files internally in memory. After that, another part of the program passes each file to a separate thread for processing. The part that scans the directory and creates the file list cannot be sped up on a parallel computer, but the part that processes the files can. The execution time of the whole task before the improvement of the resources of the system is denoted as . It includes the execution time of the part that would not benefit from the improvement of the resources and the execution time of the one that would benefit from it. The fraction of the execution time of the task that would benefit from the improvement of the resources is denoted by . The one concerning the part that would not benefit from it is therefore . Then: It is the execution of the part that benefits from the improvement of the resources that is accelerated by the factor after the improvement of the resources. Consequently, the execution time of the part that does not benefit from it remains the same, while the part that benefits from it becomes: The theoretical execution time of the whole task after the improvement of the resources is then: Amdahl's law gives the theoretical speedup in latency of the execution of the whole task at fixed workload , which yields Parallel programs If 30% of the execution time may be the subject of a speedup, p will be 0.3; if the improvement makes the affected part twice as fast, s will be 2. Amdahl's law states that the overall speedup of applying the improvement will be: For example, assume that we are given a serial task which is split into four consecutive parts, whose percentages of execution time are , , , and respectively. Then we are told that the 1st part is not sped up, so , while the 2nd part is sped up 5 times, so , the 3rd part is sped up 20 times, so , and the 4th part is sped up 1.6 times, so . By using Amdahl's law, the overall speedup is Notice how the 5 times and 20 times speedup on the 2nd and 3rd parts respectively don't have much effect on the overall speedup when the 4th part (48% of the execution time) is accelerated by only 1.6 times. Serial programs [[File:Optimizing-different-parts.svg|thumb|400px|Assume that a task has two independent parts, A and B. Part B takes roughly 25% of the time of the whole computation. By working very hard, one may be able to make this part 5 times faster, but this reduces the time of the whole computation only slightly. In contrast, one may need to perform less work to make part A perform twice as fast. This will make the computation much faster than by optimizing part B, even though part B'''s speedup is greater in terms of the ratio, (5 times versus 2 times).]] For example, with a serial program in two parts A and B for which and , if part B is made to run 5 times faster, that is and , then if part A is made to run 2 times faster, that is and , then Therefore, making part A to run 2 times faster is better than making part B to run 5 times faster. The percentage improvement in speed can be calculated as Improving part A by a factor of 2 will increase overall program speed by a factor of 1.60, which makes it 37.5% faster than the original computation. However, improving part B by a factor of 5, which presumably requires more effort, will achieve an overall speedup factor of 1.25 only, which makes it 20% faster. Optimizing the sequential part of parallel programs If the non-parallelizable part is optimized by a factor of , then It follows from Amdahl's law that the speedup due to parallelism is given by When , we have , meaning that the speedup is measured with respect to the execution time after the non-parallelizable part is optimized. When , If , and , then: Transforming sequential parts of parallel programs into parallelizable Next, we consider the case wherein the non-parallelizable part is reduced by a factor of , and the parallelizable part is correspondingly increased. Then It follows from Amdahl's law that the speedup due to parallelism is given by Relation to the law of diminishing returns Amdahl's law is often conflated with the law of diminishing returns, whereas only a special case of applying Amdahl's law demonstrates law of diminishing returns. If one picks optimally (in terms of the achieved speedup) what is to be improved, then one will see monotonically decreasing improvements as one improves. If, however, one picks non-optimally, after improving a sub-optimal component and moving on to improve a more optimal component, one can see an increase in the return. Note that it is often rational to improve a system in an order that is "non-optimal" in this sense, given that some improvements are more difficult or require larger development time than others. Amdahl's law does represent the law of diminishing returns if one is considering what sort of return one gets by adding more processors to a machine, if one is running a fixed-size computation that will use all available processors to their capacity. Each new processor added to the system will add less usable power than the previous one. Each time one doubles the number of processors the speedup ratio will diminish, as the total throughput heads toward the limit of 1/(1 − p''). This analysis neglects other potential bottlenecks such as memory bandwidth and I/O bandwidth. If these resources do not scale with the number of processors, then merely adding processors provides even lower returns. An implication of Amdahl's law is that to speed up real applications which have both serial and parallel portions, heterogeneous computing techniques are required. There are novel speedup and energy consumption models based on a more general representation of heterogeneity, referred to as the normal form heterogeneity, that support a wide range of heterogeneous many-core architectures. These modelling methods aim to predict system power efficiency and performance ranges, and facilitates research and development at the hardware and system software levels. See also Gustafson's law Universal Law of Computational Scalability Analysis of parallel algorithms Critical path method Moore's law References Further reading External links . Amdahl discusses his graduate work at the University of Wisconsin and his design of WISC. Discusses his role in the design of several computers for IBM including the STRETCH, IBM 701, and IBM 704. He discusses his work with Nathaniel Rochester and IBM's management of the design process. Mentions work with Ramo-Wooldridge, Aeronutronic, and Computer Sciences Corporation Amdahl's Law: Not all performance improvements are created equal (2007) "Amdahl's Law" by Joel F. Klein, Wolfram Demonstrations Project (2007) Amdahl's Law in the Multicore Era (July 2008) What the $#@! is Parallelism, Anyhow? (Charles Leiserson, May 2008) Evaluation of the Intel Core i7 Turbo Boost feature, by James Charles, Preet Jassi, Ananth Narayan S, Abbas Sadat and Alexandra Fedorova (2009) Calculation of the acceleration of parallel programs as a function of the number of threads, by George Popov, Valeri Mladenov and Nikos Mastorakis (January 2010) Danny Hillis - Proving Amdahl's Law wrong, video recorded October 2016 Analysis of parallel algorithms Computer architecture statements
2332
https://en.wikipedia.org/wiki/AD%20%28disambiguation%29
AD (disambiguation)
AD (Anno Domini) is a designation used to label years following 1 BC in the Julian and Gregorian calendars while Ad (advertisement) is a form of marketing communication. AD, A.D. or Ad may also refer to: Art, entertainment, and media Film and television A.D. (film), a 2010 animated zombie horror film A.D. (miniseries), a 1985 television miniseries set in ancient Rome A.D. The Bible Continues, a 2015 biblical drama television miniseries Arrested Development, an American television sitcom Attarintiki Daredi, 2013 Indian film by Trivikram Srinivas Audio description, a service for visually impaired audience on some TV programs Music AD (band), a Christian rock band A.D. (album), by Solace Publications A.D.: New Orleans After the Deluge, a nonfiction graphic novel about Hurricane Katrina Algemeen Dagblad, a Dutch newspaper Architectural Digest, an interior design and landscaping magazine AD (poem), by Kenneth Fearing Other art, entertainment, and media Audio description track, a narration track for visually impaired consumers of visual media Brands and enterprises Alexander Dennis, a British bus manufacturer Akcionersko društvo (aкционерско друштво), a Macedonian name for a type of company Aktsionerno drujestvo (акционерно дружество), a Bulgarian name for a type of company akcionarsko društvo (aкционарско друштво), a Serbian name for a type of company Analog Devices, a semiconductor company Military Accidental discharge, a mechanical failure of a firearm causing it to fire Active duty, a status of full duty or service, usually in the armed forces Air defense, anti-aircraft weaponry and systems Air Department, part of the British Admiralty Destroyer tender, a type of support ship (US Navy hull classification symbol AD) AD Skyraider, former name of the Douglas A-1 Skyraider, a Navy attack aircraft Organizations Action Directe, French far-left militant group Democratic Action (Venezuela) (Spanish: Acción Democrática), social democratic and center-left political party Democratic Alliance (Portugal) (Portuguese: Aliança Democrática), a former centre-right political alliance Democratic Alternative (Malta) (Maltese: Alternattiva Demokratika), a green political party People Ad (name), a given name, and a list of people with the name ‘Ad, great-grandson of Shem, son of Noah Anthony Davis (born 1993), American basketball player Antonio Davis (born 1968), American basketball player A. D. Loganathan (1888–1949), officer of the Indian National Army A. D. Whitfield (born 1943), American football player A. D. Winans (born 1936), American poet, essayist, short story writer and publisher A.D., nickname of Adrian Peterson (born 1985), American football player Places AD, ISO 3166-1 country for Andorra Abu Dhabi, capital of the United Arab Emirates AD, herbarium code for the State Herbarium of South Australia Professions Art director, for a magazine or newspaper Assistant director, a film or television crew member Athletic director, the administrator of an athletics program Science and technology Biology and medicine Addison's disease, an endocrine disorder Adenovirus, viruses of the family Adenoviridae Alzheimer's disease, a neurodegenerative disease Anaerobic digestion, processes by which microorganisms break down biodegradable material Anti-diarrheal, medication which provides symptomatic relief for diarrhea Approximate digestibility, an index measure of the digestibility of animal feed Atopic dermatitis, form of skin inflammation Atypical depression, a type of depression Autosomal dominant, a classification of genetic traits Autonomic dysreflexia, a potential medical emergency Chemistry Adamantyl, abbreviated "Ad" in organic chemistry Sharpless asymmetric dihydroxylation, a type of organic chemical reaction Computing .ad, the top level domain for Andorra Administrative distance, a metric in routing Active Directory, software for the management of Microsoft Windows domains Administrative domain, a computer networking facility Analog-to-digital converter, a type of electronic circuit Automatic differentiation, a set of computer programming techniques to speedily compute derivatives AD16, the hexadecimal number equal to decimal number 173 Mathematics Adjoint representation of a Lie group, abbreviated "Ad" in mathematics Axiom of determinacy, a set theory axiom Physics Antiproton Decelerator, a device at the CERN physics laboratory Autodynamics, a physics theory Other uses in science and technology Active disassembly, a technology supporting the cost-effective deconstruction of complex materials Transportation AD, IATA code for: Air Paradise, a defunct Indonesian airline Azul Brazilian Airlines Airworthiness Directive, an aircraft maintenance requirement notice Other uses ʿĀd, an ancient Arab tribe, mentioned in the Quran Aggregate demand, in macroeconomics Anno Diocletiani, an alternative year numbering system United States Academic Decathlon, a high school academic competition See also Anno Domini (disambiguation) BC (disambiguation) Domino (disambiguation) Dominus (disambiguation)
2341
https://en.wikipedia.org/wiki/Alkaloid
Alkaloid
Alkaloids are a class of basic, naturally occurring organic compounds that contain at least one nitrogen atom. This group also includes some related compounds with neutral and even weakly acidic properties. Some synthetic compounds of similar structure may also be termed alkaloids. In addition to carbon, hydrogen and nitrogen, alkaloids may also contain oxygen or sulfur. More rarely still, they may contain elements such as phosphorus, chlorine, and bromine. Alkaloids are produced by a large variety of organisms including bacteria, fungi, plants, and animals. They can be purified from crude extracts of these organisms by acid-base extraction, or solvent extractions followed by silica-gel column chromatography. Alkaloids have a wide range of pharmacological activities including antimalarial (e.g. quinine), antiasthma (e.g. ephedrine), anticancer (e.g. homoharringtonine), cholinomimetic (e.g. galantamine), vasodilatory (e.g. vincamine), antiarrhythmic (e.g. quinidine), analgesic (e.g. morphine), antibacterial (e.g. chelerythrine), and antihyperglycemic activities (e.g. piperine). Many have found use in traditional or modern medicine, or as starting points for drug discovery. Other alkaloids possess psychotropic (e.g. psilocin) and stimulant activities (e.g. cocaine, caffeine, nicotine, theobromine), and have been used in entheogenic rituals or as recreational drugs. Alkaloids can be toxic too (e.g. atropine, tubocurarine). Although alkaloids act on a diversity of metabolic systems in humans and other animals, they almost uniformly evoke a bitter taste. The boundary between alkaloids and other nitrogen-containing natural compounds is not clear-cut. Compounds like amino acid peptides, proteins, nucleotides, nucleic acid, amines, and antibiotics are usually not called alkaloids. Natural compounds containing nitrogen in the exocyclic position (mescaline, serotonin, dopamine, etc.) are usually classified as amines rather than as alkaloids. Some authors, however, consider alkaloids a special case of amines. Naming The name "alkaloids" () was introduced in 1819 by German chemist Carl Friedrich Wilhelm Meissner, and is derived from late Latin root and the Greek-language suffix -('like'). However, the term came into wide use only after the publication of a review article, by Oscar Jacobsen in the chemical dictionary of Albert Ladenburg in the 1880s. There is no unique method for naming alkaloids. Many individual names are formed by adding the suffix "ine" to the species or genus name. For example, atropine is isolated from the plant Atropa belladonna; strychnine is obtained from the seed of the Strychnine tree (Strychnos nux-vomica L.). Where several alkaloids are extracted from one plant their names are often distinguished by variations in the suffix: "idine", "anine", "aline", "inine" etc. There are also at least 86 alkaloids whose names contain the root "vin" because they are extracted from vinca plants such as Vinca rosea (Catharanthus roseus); these are called vinca alkaloids. History Alkaloid-containing plants have been used by humans since ancient times for therapeutic and recreational purposes. For example, medicinal plants have been known in Mesopotamia from about 2000 BC. The Odyssey of Homer referred to a gift given to Helen by the Egyptian queen, a drug bringing oblivion. It is believed that the gift was an opium-containing drug. A Chinese book on houseplants written in 1st–3rd centuries BC mentioned a medical use of ephedra and opium poppies. Also, coca leaves have been used by Indigenous South Americans since ancient times. Extracts from plants containing toxic alkaloids, such as aconitine and tubocurarine, were used since antiquity for poisoning arrows. Studies of alkaloids began in the 19th century. In 1804, the German chemist Friedrich Sertürner isolated from opium a "soporific principle" (), which he called "morphium", referring to Morpheus, the Greek god of dreams; in German and some other Central-European languages, this is still the name of the drug. The term "morphine", used in English and French, was given by the French physicist Joseph Louis Gay-Lussac. A significant contribution to the chemistry of alkaloids in the early years of its development was made by the French researchers Pierre Joseph Pelletier and Joseph Bienaimé Caventou, who discovered quinine (1820) and strychnine (1818). Several other alkaloids were discovered around that time, including xanthine (1817), atropine (1819), caffeine (1820), coniine (1827), nicotine (1828), colchicine (1833), sparteine (1851), and cocaine (1860). The development of the chemistry of alkaloids was accelerated by the emergence of spectroscopic and chromatographic methods in the 20th century, so that by 2008 more than 12,000 alkaloids had been identified. The first complete synthesis of an alkaloid was achieved in 1886 by the German chemist Albert Ladenburg. He produced coniine by reacting 2-methylpyridine with acetaldehyde and reducing the resulting 2-propenyl pyridine with sodium. Classifications Compared with most other classes of natural compounds, alkaloids are characterized by a great structural diversity. There is no uniform classification. Initially, when knowledge of chemical structures was lacking, botanical classification of the source plants was relied on. This classification is now considered obsolete. More recent classifications are based on similarity of the carbon skeleton (e.g., indole-, isoquinoline-, and pyridine-like) or biochemical precursor (ornithine, lysine, tyrosine, tryptophan, etc.). However, they require compromises in borderline cases; for example, nicotine contains a pyridine fragment from nicotinamide and a pyrrolidine part from ornithine and therefore can be assigned to both classes. Alkaloids are often divided into the following major groups: "True alkaloids" contain nitrogen in the heterocycle and originate from amino acids. Their characteristic examples are atropine, nicotine, and morphine. This group also includes some alkaloids that besides the nitrogen heterocycle contain terpene (e.g., evonine) or peptide fragments (e.g. ergotamine). The piperidine alkaloids coniine and coniceine may be regarded as true alkaloids (rather than pseudoalkaloids: see below) although they do not originate from amino acids. "Protoalkaloids", which contain nitrogen (but not the nitrogen heterocycle) and also originate from amino acids. Examples include mescaline, adrenaline and ephedrine. Polyamine alkaloids – derivatives of putrescine, spermidine, and spermine. Peptide and cyclopeptide alkaloids. Pseudoalkaloids – alkaloid-like compounds that do not originate from amino acids. This group includes terpene-like and steroid-like alkaloids, as well as purine-like alkaloids such as caffeine, theobromine, theacrine and theophylline. Some authors classify ephedrine and cathinone as pseudoalkaloids. Those originate from the amino acid phenylalanine, but acquire their nitrogen atom not from the amino acid but through transamination. Some alkaloids do not have the carbon skeleton characteristic of their group. So, galanthamine and homoaporphines do not contain isoquinoline fragment, but are, in general, attributed to isoquinoline alkaloids. Main classes of monomeric alkaloids are listed in the table below: Properties Most alkaloids contain oxygen in their molecular structure; those compounds are usually colorless crystals at ambient conditions. Oxygen-free alkaloids, such as nicotine or coniine, are typically volatile, colorless, oily liquids. Some alkaloids are colored, like berberine (yellow) and sanguinarine (orange). Most alkaloids are weak bases, but some, such as theobromine and theophylline, are amphoteric. Many alkaloids dissolve poorly in water but readily dissolve in organic solvents, such as diethyl ether, chloroform or 1,2-dichloroethane. Caffeine, cocaine, codeine and nicotine are slightly soluble in water (with a solubility of ≥1g/L), whereas others, including morphine and yohimbine are very slightly water-soluble (0.1–1 g/L). Alkaloids and acids form salts of various strengths. These salts are usually freely soluble in water and ethanol and poorly soluble in most organic solvents. Exceptions include scopolamine hydrobromide, which is soluble in organic solvents, and the water-soluble quinine sulfate. Most alkaloids have a bitter taste or are poisonous when ingested. Alkaloid production in plants appeared to have evolved in response to feeding by herbivorous animals; however, some animals have evolved the ability to detoxify alkaloids. Some alkaloids can produce developmental defects in the offspring of animals that consume but cannot detoxify the alkaloids. One example is the alkaloid cyclopamine, produced in the leaves of corn lily. During the 1950s, up to 25% of lambs born by sheep that had grazed on corn lily had serious facial deformations. These ranged from deformed jaws to cyclopia (see picture). After decades of research, in the 1980s, the compound responsible for these deformities was identified as the alkaloid 11-deoxyjervine, later renamed to cyclopamine. Distribution in nature Alkaloids are generated by various living organisms, especially by higher plants – about 10 to 25% of those contain alkaloids. Therefore, in the past the term "alkaloid" was associated with plants. The alkaloids content in plants is usually within a few percent and is inhomogeneous over the plant tissues. Depending on the type of plants, the maximum concentration is observed in the leaves (for example, black henbane), fruits or seeds (Strychnine tree), root (Rauvolfia serpentina) or bark (cinchona). Furthermore, different tissues of the same plants may contain different alkaloids. Beside plants, alkaloids are found in certain types of fungus, such as psilocybin in the fruiting bodies of the genus Psilocybe, and in animals, such as bufotenin in the skin of some toads and a number of insects, markedly ants. Many marine organisms also contain alkaloids. Some amines, such as adrenaline and serotonin, which play an important role in higher animals, are similar to alkaloids in their structure and biosynthesis and are sometimes called alkaloids. Extraction Because of the structural diversity of alkaloids, there is no single method of their extraction from natural raw materials. Most methods exploit the property of most alkaloids to be soluble in organic solvents but not in water, and the opposite tendency of their salts. Most plants contain several alkaloids. Their mixture is extracted first and then individual alkaloids are separated. Plants are thoroughly ground before extraction. Most alkaloids are present in the raw plants in the form of salts of organic acids. The extracted alkaloids may remain salts or change into bases. Base extraction is achieved by processing the raw material with alkaline solutions and extracting the alkaloid bases with organic solvents, such as 1,2-dichloroethane, chloroform, diethyl ether or benzene. Then, the impurities are dissolved by weak acids; this converts alkaloid bases into salts that are washed away with water. If necessary, an aqueous solution of alkaloid salts is again made alkaline and treated with an organic solvent. The process is repeated until the desired purity is achieved. In the acidic extraction, the raw plant material is processed by a weak acidic solution (e.g., acetic acid in water, ethanol, or methanol). A base is then added to convert alkaloids to basic forms that are extracted with organic solvent (if the extraction was performed with alcohol, it is removed first, and the remainder is dissolved in water). The solution is purified as described above. Alkaloids are separated from their mixture using their different solubility in certain solvents and different reactivity with certain reagents or by distillation. A number of alkaloids are identified from insects, among which the fire ant venom alkaloids known as solenopsins have received greater attention from researchers. These insect alkaloids can be efficiently extracted by solvent immersion of live fire ants or by centrifugation of live ants followed by silica-gel chromatography purification. Tracking and dosing the extracted solenopsin ant alkaloids has been described as possible based on their absorbance peak around 232 nanometers. Biosynthesis Biological precursors of most alkaloids are amino acids, such as ornithine, lysine, phenylalanine, tyrosine, tryptophan, histidine, aspartic acid, and anthranilic acid. Nicotinic acid can be synthesized from tryptophan or aspartic acid. Ways of alkaloid biosynthesis are too numerous and cannot be easily classified. However, there are a few typical reactions involved in the biosynthesis of various classes of alkaloids, including synthesis of Schiff bases and Mannich reaction. Synthesis of Schiff bases Schiff bases can be obtained by reacting amines with ketones or aldehydes. These reactions are a common method of producing C=N bonds. In the biosynthesis of alkaloids, such reactions may take place within a molecule, such as in the synthesis of piperidine: Mannich reaction An integral component of the Mannich reaction, in addition to an amine and a carbonyl compound, is a carbanion, which plays the role of the nucleophile in the nucleophilic addition to the ion formed by the reaction of the amine and the carbonyl. The Mannich reaction can proceed both intermolecularly and intramolecularly: Dimer alkaloids In addition to the described above monomeric alkaloids, there are also dimeric, and even trimeric and tetrameric alkaloids formed upon condensation of two, three, and four monomeric alkaloids. Dimeric alkaloids are usually formed from monomers of the same type through the following mechanisms: Mannich reaction, resulting in, e.g., voacamine Michael reaction (villalstonine) Condensation of aldehydes with amines (toxiferine) Oxidative addition of phenols (dauricine, tubocurarine) Lactonization (carpaine). There are also dimeric alkaloids formed from two distinct monomers, such as the vinca alkaloids vinblastine and vincristine, which are formed from the coupling of catharanthine and vindoline. The newer semi-synthetic chemotherapeutic agent vinorelbine is used in the treatment of non-small-cell lung cancer. It is another derivative dimer of vindoline and catharanthine and is synthesised from anhydrovinblastine, starting either from leurosine or the monomers themselves. Biological role Alkaloids are among the most important and best-known secondary metabolites, i.e. biogenic substances not directly involved in the normal growth, development, or reproduction of the organism. Instead, they generally mediate ecological interactions, which may produce a selective advantage for the organism by increasing its survivability or fecundity. In some cases their function, if any, remains unclear. An early hypothesis, that alkaloids are the final products of nitrogen metabolism in plants, as urea and uric acid are in mammals, was refuted by the finding that their concentration fluctuates rather than steadily increasing. Most of the known functions of alkaloids are related to protection. For example, aporphine alkaloid liriodenine produced by the tulip tree protects it from parasitic mushrooms. In addition, the presence of alkaloids in the plant prevents insects and chordate animals from eating it. However, some animals are adapted to alkaloids and even use them in their own metabolism. Such alkaloid-related substances as serotonin, dopamine and histamine are important neurotransmitters in animals. Alkaloids are also known to regulate plant growth. One example of an organism that uses alkaloids for protection is the Utetheisa ornatrix, more commonly known as the ornate moth. Pyrrolizidine alkaloids render these larvae and adult moths unpalatable to many of their natural enemies like coccinelid beetles, green lacewings, insectivorous hemiptera and insectivorous bats. Another example of alkaloids being utilized occurs in the poison hemlock moth (Agonopterix alstroemeriana). This moth feeds on its highly toxic and alkaloid-rich host plant poison hemlock (Conium maculatum) during its larval stage. A. alstroemeriana may benefit twofold from the toxicity of the naturally-occurring alkaloids, both through the unpalatability of the species to predators and through the ability of A. alstroemeriana to recognize Conium maculatum as the correct location for oviposition. A fire ant venom alkaloid known as solenopsin has been demonstrated to protect queens of invasive fire ants during the foundation of new nests, thus playing a central role in the spread of this pest ant species around the world. Applications In medicine Medical use of alkaloid-containing plants has a long history, and, thus, when the first alkaloids were isolated in the 19th century, they immediately found application in clinical practice. Many alkaloids are still used in medicine, usually in the form of salts widely used including the following: Many synthetic and semisynthetic drugs are structural modifications of the alkaloids, which were designed to enhance or change the primary effect of the drug and reduce unwanted side-effects. For example, naloxone, an opioid receptor antagonist, is a derivative of thebaine that is present in opium. In agriculture Prior to the development of a wide range of relatively low-toxic synthetic pesticides, some alkaloids, such as salts of nicotine and anabasine, were used as insecticides. Their use was limited by their high toxicity to humans. Use as psychoactive drugs Preparations of plants containing alkaloids and their extracts, and later pure alkaloids, have long been used as psychoactive substances. Cocaine, caffeine, and cathinone are stimulants of the central nervous system. Mescaline and many indole alkaloids (such as psilocybin, dimethyltryptamine and ibogaine) have hallucinogenic effect. Morphine and codeine are strong narcotic pain killers. There are alkaloids that do not have strong psychoactive effect themselves, but are precursors for semi-synthetic psychoactive drugs. For example, ephedrine and pseudoephedrine are used to produce methcathinone and methamphetamine. Thebaine is used in the synthesis of many painkillers such as oxycodone. See also Amine Base (chemistry) List of poisonous plants Mayer's reagent Natural products Palau'amine Secondary metabolite Explanatory notes Citations General and cited references External links
2346
https://en.wikipedia.org/wiki/Albion%2C%20Michigan
Albion, Michigan
Albion is a city in Calhoun County in the south central region of the Lower Peninsula of the U.S. state of Michigan. The population was 7,700 at the 2020 census and is part of the Battle Creek Metropolitan Statistical Area. The earliest English-speaking settlers also referred to this area as The Forks, because it is situated at the confluence of the north and south branches of the Kalamazoo River. In the early 20th century, immigrants came to Albion from a variety of eastern European nations, including the current Lithuania and Russia. More recently, Hispanic or Latino immigrants have come from Mexico and Central America. The Festival of the Forks has been held annually since 1967 to celebrate Albion's diverse ethnic heritage. Since the 19th century, several major manufacturers were established here and Albion became known as a factory town. This has changed with the closure of several manufacturers. In the 21st century, Albion's culture is changing to that of a college town whose residents have a strong interest in technology and sustainability issues. Albion College is a private liberal arts college with a student population of about 1,250. Albion is a sister city with Noisy-le-Roi, France. History The first European-American settler, Tenney Peabody, arrived in 1833 along with his brother-in-law Charles Blanchard, and another young man named Clark Dowling. Peabody's family followed soon after. In 1835, the Albion Company, a land development company formed by Jesse Crowell, platted a village. Peabody's wife was asked to name the settlement. She considered the name "Peabodyville", but selected "Albion" instead, after the former residence of Jesse Crowell. Crowell was appointed in 1838 as the first US postmaster here. Many early settlers migrated here from western New York and New England, part of a movement after construction of the Erie Canal, and opening of new lands in Michigan and other Great Lakes territories. They first developed agriculture and it became a rural trading village. Settlers were strong supporters of education and in 1835, Methodists established Albion College affiliated with their church. The first classes were held in Albion in 1843. The college was known by a few other names before 1861. At that time it was fully authorized to confer four-year degrees on both men and women. Albion incorporated as a village in 1855, following construction of the railroad here in 1852, which stimulated development. It became a city in 1885. Mills were constructed to operate on the water power of the forks of the Kalamazoo River. They were the first industry in the town, used to process lumber, grain and other products to build the village. Albion quickly became a mill town as well as an agricultural market. The river that powered industry also flooded the town. In the Great Flood of 1908, there was severe property damage. In February, snowstorms had deposited several feet of snow across the region. Heavy rains and warmer conditions in early March created water saturation in the ground and risk of flooding because of the high flow in the rivers. After the Homer Dam broke around 3 p.m. on March 7, the Kalamazoo River flooded Albion. By midnight, the bridges surrounding town were underwater. Six buildings in Albion collapsed, resulting in more than US$125,000 in damage (1908 dollars). The town struggled to recover. In the late 19th and early 20th centuries, numerous Lithuanian and other Eastern European immigrants settled here, most working for the Albion Malleable Iron Company, and some in the coal mine north of town. The iron company had initially made agricultural implements but around World War I shifted to making automotive parts. The Malleable merged in 1969 with the Hayes Corporation, becoming the Hayes-Albion Corporation. Now known as a division of Harvard Industries, the company continues to produce automotive castings in Albion. Molder Statue Park in downtown is dedicated to the many molders who dealt with molten iron. There were soon enough Lithuanians in town to establish Holy Ascension Orthodox Church, which they built in 1916. It is part of the Orthodox Church in America. Today its services are in English. The city had its peak of population in 1960. In 1973 Albion was named an All-America City by the National Civic League. It celebrated winning the award on May 15, 1974, when the Governor of Michigan, William Milliken, and many dignitaries came to town. In 1975 the closure of a major factory began a difficult period of industrial restructuring and decline in jobs and population. Since that time citizens have mobilized, founding the Albion Community Foundation in 1968. They formed the Albion Volunteer Service Organization in the 1980s, with support from Albion College, to address the challenge of diminishing economic opportunity. Key to the City Honor Bestowed: 1964: Aunt Jemima visited Albion on January 25. 1960s: Columnist Ann Landers was presented with a key upon her visit to Starr Commonwealth for Boys. Law and government Albion has a Council-Manager form of government. City residents elect a Mayor at-large and City Council members from each of six single-member districts. The council in turn selects a City Manager to handle day-to-day affairs of the city. The mayor presides over and is a voting member of the council. Council members are elected to four-year terms, staggered every two years. A mayor is elected every two years. The city levies an income tax of 1 percent on residents and 0.5 percent on nonresidents. Geography According to the United States Census Bureau, the city has a total area of , of which is land and is water. Albion is positioned 42.24 degrees north of the equator and 84.75 degrees west of the prime meridian. Climate Demographics 2010 population by gender/age 2010 population by ethnicity 2010 population by race Transportation Major highways Rail Amtrak, the national passenger rail system, provides daily service to Albion, operating its Wolverine both directions between Chicago, Illinois and Pontiac, Michigan, via Detroit. Bus Greyhound Lines provides daily intercity city bus service to Albion between Chicago, Illinois and Detroit. Notable people Kim Cascone, musician, composer, owner of Silent Records; born in Albion M. F. K. Fisher, food writer, born in Albion Ada Iddings Gale, author, lived and buried in Albion Frank Joranko, football player and coach for Albion College LaVall Jordan, head men's basketball coach for Butler University, born in Albion Martin Wells Knapp, American Methodist evangelist who founded the Pilgrim Holiness Church and God's Bible School and College, born in Albion. Bill Laswell, jazz bassist, record producer and record label owner; raised in Albion Jerome D. Mack, banker, director of Las Vegas hotels Riviera and Dunes, founder of University of Nevada, Las Vegas; born in Albion Deacon McGuire, professional baseball player for 26 seasons, lived in Albion Gary Lee Nelson, composer, pioneer in electronic and computer music; grew up in Albion John Sinclair, poet and political activist, attended Albion College Jon Scieszka, children's author, attended Albion College Brian Tyler, racing driver, born in Albion Jack Vaughn, Assistant Secretary of State, Ambassador to Panama and Colombia, and Director of the Peace Corps (1966-1969); grew up in Albion The War and Treaty, musical duo See also Holy Ascension Orthodox Church References External links City of Albion official website Albion City Information Page Albion District Library Albion Michigan Home Page Historical Albion Michigan Festival of the Forks – Albion's annual music and food festival by the forks of the Kalamazoo River The Greater Albion Chamber of Commerce Albion Michigan Community Foundation – For Good. For Ever. Cities in Calhoun County, Michigan Populated places established in 1835 1835 establishments in Michigan Territory
2357
https://en.wikipedia.org/wiki/American%20Football%20League
American Football League
The American Football League (AFL) was a major professional American football league that operated for ten seasons from 1960 until 1970, when it merged with the older National Football League (NFL), and became the American Football Conference. The upstart AFL operated in direct competition with the more established NFL throughout its existence. It was more successful than earlier rivals to the NFL with the same name, the 1926, 1936 and 1940 leagues, and the later All-America Football Conference (which existed between 1944 and 1950 but only played between 1946 and 1949). This fourth version of the AFL was the most successful, created by a number of owners who had been refused NFL expansion franchises or had minor shares of NFL franchises. The AFL's original lineup consisted of an Eastern division of the New York Titans, Boston Patriots, Buffalo Bills, and the Houston Oilers, and a Western division of the Los Angeles Chargers, Denver Broncos, Oakland Raiders, and Dallas Texans. The league first gained attention by signing 75% of the NFL's first-round draft choices in 1960, including Houston's successful signing of college star and Heisman Trophy winner Billy Cannon. While the first years of the AFL saw uneven competition and low attendance, the league was buttressed by a generous television contract with the American Broadcasting Company (ABC), followed by a contract with the competing National Broadcasting Company (NBC) for games starting with the 1965 season, that broadcast the more offense-oriented football league nationwide. Continuing to attract top talent from colleges and the NFL by the mid-1960s, as well as successful franchise shifts of the Chargers from L.A. south to San Diego and the Texans north to Kansas City (becoming the Kansas City Chiefs), the AFL established a dedicated following. The transformation of the struggling Titans into the New York Jets under new ownership, including the signing of University of Alabama star quarterback Joe Namath, further solidified the league's reputation among the major media. As fierce competition made player salaries skyrocket in both leagues, especially after a series of "raids", the leagues agreed to a merger in 1966. Among the conditions were a common draft and a championship game played between the two league champions first played in early 1967, which would eventually become known as the Super Bowl. The AFL and NFL operated as separate leagues until 1970, with separate regular season and playoff schedules except for the championship game. NFL Commissioner Pete Rozelle also became chief executive of the AFL from July 26, 1966, through the completion of the merger. During this time the AFL expanded, adding the Miami Dolphins and Cincinnati Bengals. After losses by the Kansas City Chiefs and Oakland Raiders in the first two AFL-NFL World Championship Games to the Green Bay Packers (1966–67), the New York Jets and Chiefs won Super Bowls III and IV (1968–69) respectively, cementing the league's claim to being an equal to the NFL. In 1970, the AFL was absorbed into the NFL. The ten AFL franchises joined three existing NFL teams- the Baltimore Colts, the Cleveland Browns, and the Pittsburgh Steelers-to form the merged league's American Football Conference. League history During the 1950s, the National Football League had grown to rival Major League Baseball as one of the most popular professional sports leagues in the United States. One franchise that did not share in this newfound success of the league was the Chicago Cardinals – owned by the Bidwill family – who had become overshadowed by the more popular Chicago Bears. The Bidwills hoped to move their franchise, preferably to St. Louis, but could not come to terms with the league, which demanded money before it would approve the move. Needing cash, the Bidwills began entertaining offers from would-be investors, and one of the men who approached the Bidwills was Lamar Hunt, son and heir of millionaire oilman H. L. Hunt. Hunt offered to buy the Cardinals and move them to Dallas, where he had grown up. However, these negotiations came to nothing, since the Bidwills insisted on retaining a controlling interest in the franchise and were unwilling to move their team to a city where a previous NFL franchise had failed in . While Hunt negotiated with the Bidwills, similar offers were made by Bud Adams, Bob Howsam, and Max Winter. When Hunt, Adams, and Howsam were unable to secure a controlling interest in the Cardinals, they approached NFL commissioner Bert Bell and proposed the addition of expansion teams. Bell, wary of expanding the 12-team league and risking its newfound success, rejected the offer. On his return flight to Dallas, Hunt conceived the idea of an entirely new league and decided to contact the others who had shown interest in purchasing the Cardinals. In addition to Adams, Howsam, and Winter, Hunt reached out to Bill Boyer, Winter's business partner, to gauge their interest in starting a new league. Hunt's first meeting with Adams was held in March 1959. Hunt, who felt a regional rivalry would be critical for the success of the new league, convinced Adams to join and found his team in Houston. Hunt next secured an agreement from Howsam to bring a team to Denver. After Winter and Boyer agreed to start a team in Minneapolis-Saint Paul, the new league had its first four teams. Hunt then approached Willard Rhodes, who hoped to bring pro football to Seattle. However, not wanting to undermine its own brand, the University of Washington was unwilling to let the fledgling league use Husky Stadium, and Rhodes' effort came to nothing (Seattle would later get a pro football team of its own). Hunt also sought franchises in Los Angeles, Buffalo and New York City. During the summer of 1959, he sought the blessings of the NFL for his nascent league, as he did not seek a potentially costly rivalry. Within weeks of the July 1959 announcement of the league's formation, Hunt received commitments from Barron Hilton and Harry Wismer to bring teams to Los Angeles and New York, respectively. His initial efforts for Buffalo, however, were rebuffed, when Hunt's first choice of owner, Pat McGroder, declined to take part; McGroder had hoped that the threat of the AFL would be enough to prompt the NFL to expand to Buffalo. On August 14, 1959, the first league meeting was held in Chicago, and charter memberships were given to Dallas, New York, Houston, Denver, Los Angeles, and Minneapolis-Saint Paul. On August 22, the league officially was named the American Football League at a meeting in Dallas. The NFL's initial reaction was not as openly hostile as it had been with the earlier All-America Football Conference (AAFC), as Bell had even given his public approval; but he died suddenly in October 1959, and individual NFL owners soon began a campaign to undermine the new league. AFL owners were approached with promises of new NFL franchises or ownership stakes in existing ones. Only the party from Minneapolis-Saint Paul accepted, and with the addition of Ole Haugsrud and Bernie Ridder the Minnesota group joined the NFL in 1961 as the Minnesota Vikings. The older league also announced on August 29 that it had conveniently reversed its position against expansion, and planned to bring new NFL teams to Houston and Dallas, to start play in 1961. (The NFL did not expand to Houston at that time, the promised Dallas team – the Dallas Cowboys – actually started play in 1960, and the Vikings began play in 1961.) Finally, the NFL quickly came to terms with the Bidwills and allowed them to relocate the struggling Cardinals to St. Louis, eliminating that city as a potential AFL market. Ralph Wilson, who owned a minority interest in the NFL's Detroit Lions at the time, initially announced he was placing a team in Miami, but like the Seattle situation, was also rebuffed by local ownership (like Seattle, Miami would later get a pro football team of its own as well); given five other choices, Wilson negotiated with McGroder and brought the team that became the Bills to Buffalo. Buffalo was officially awarded its franchise on October 28. During a league meeting on November 22, a 10-man ownership group from Boston (led by Billy Sullivan) was awarded the AFL's eighth team. On November 30, 1959, Joe Foss, a World War II Marine fighter ace and former governor of South Dakota, was named the AFL's first commissioner. Foss commissioned a friend of Harry Wismer's to develop the AFL's eagle-on-football logo. Hunt was elected President of the AFL on January 26, 1960. The AFL draft The AFL's first draft took place the same day Boston was awarded its franchise, and lasted 33 rounds. The league held a second draft on December 2, which lasted for 20 rounds. Because the Oakland Raiders joined after the initial AFL drafts, they inherited Minnesota's selections. A special allocation draft was held in January 1960, to allow the Raiders to stock their team, as some of the other AFL teams had already signed some of Minneapolis' original draft choices. Crisis and success (1960–61) In November 1959, Minneapolis-Saint Paul owner Max Winter announced his intent to leave the AFL to accept a franchise offer from the NFL. In 1961, his team began play in the NFL as the Minnesota Vikings. Los Angeles Chargers owner Barron Hilton demanded that a replacement for Minnesota be placed in California, to reduce his team's operating costs and to create a rivalry. After a brief search, Oakland was chosen and an ownership group led by F. Wayne Valley and local real estate developer Chet Soda was formed. After initially being called the Oakland Señors, the rechristened Oakland Raiders officially joined the AFL on January 30, 1960. The AFL's first major success came when the Houston Oilers signed Billy Cannon, the All-American and 1959 Heisman Trophy winner from LSU. Cannon signed a $100,000 contract to play for the Oilers, despite having already signed a $50,000 contract with the NFL's Los Angeles Rams. The Oilers filed suit and claimed that Rams general manager Pete Rozelle had unduly manipulated Cannon. The court upheld the Houston contract, and with Cannon the Oilers appeared in the AFL's first three championship games (winning two). On June 9, 1960, the league signed a five-year television contract with ABC, which brought in revenues of approximately $2.125 million per year for the entire league. On June 17, the AFL filed an antitrust lawsuit against the NFL, which was dismissed in 1962 after a two-month trial. The AFL began regular-season play (a night game on Friday, September 9, 1960) with eight teams in the league – the Boston Patriots, Buffalo Bills, Dallas Texans, Denver Broncos, Houston Oilers, Los Angeles Chargers, New York Titans, and Oakland Raiders. Raiders' co-owner Wayne Valley dubbed the AFL ownership "The Foolish Club", a term Lamar Hunt subsequently used on team photographs he sent as Christmas gifts. The Oilers became the first-ever league champions by defeating the Chargers, 24–16, in the AFL Championship on January 1, 1961. Attendance for the 1960 season was respectable for a new league, but not nearly that of the NFL. In 1960, the NFL averaged attendance of more than 40,000 fans per game and more popular NFL teams in 1960 regularly saw attendance figures in excess of 50,000 per game, while CFL attendances averaged approximately 20,000 per game. By comparison, AFL attendance averaged about 16,500 per game and generally hovered between 10,000 and 20,000 per game. Professional football was still primarily a gate-driven business in 1960, so low attendance meant financial losses. The Raiders, with a league-worst average attendance of just 9,612, lost $500,000 in their first year and only survived after receiving a $400,000 loan from Bills owner Ralph Wilson. In an early sign of stability, however, the AFL did not lose any teams after its first year of operation. In fact, the only major change was the Chargers' move from Los Angeles to nearby San Diego (they would return to Los Angeles in 2017). On August 8, 1961, the AFL challenged the Canadian Football League to an exhibition game that would feature the Hamilton Tiger-Cats and the Buffalo Bills, which was attended by 24,376 spectators. Playing at Civic Stadium in Hamilton, Ontario, the Tiger-Cats defeated the Bills 38–21 playing a mix of AFL and CFL rules. Movement and instability (1962–63) While the Oilers found instant success in the AFL, other teams did not fare as well. The Oakland Raiders and New York Titans struggled on and off the field during their first few seasons in the league. Oakland's eight-man ownership group was reduced to just three in 1961, after heavy financial losses in their first season. Attendance for home games was poor, partly due to the team playing in the San Francisco Bay Area—which already had an established NFL team (the San Francisco 49ers)—but the product on the field was also to blame. After winning six games in their debut season, the Raiders won a total of three times in the 1961 and 1962 seasons. Oakland took part in a 1961 supplemental draft meant to boost the weaker teams in the league, but it did little good. They participated in another such draft in 1962. The Titans fared a little better on the field but had their own financial troubles. Attendance was so low for home games that team owner Harry Wismer had fans move to seats closer to the field to give the illusion of a fuller stadium on television. Eventually Wismer could no longer afford to meet his payroll, and on November 8, 1962, the AFL took over operations of the team. The Titans were sold to a five-person ownership group headed by Sonny Werblin on March 28, 1963, and in April the new owners changed the team's name to the New York Jets. The Raiders and Titans both finished last in their respective divisions in the 1962 season. The Texans and Oilers, winners of their divisions, faced each other for the 1962 AFL Championship on December 23. The Texans dethroned the two-time champion Oilers, 20–17, in a double-overtime contest that was, at the time, professional football's longest-ever game. In 1963, the Texans became the second AFL team to relocate. Lamar Hunt felt that despite winning the league championship in 1962, the Texans could not sufficiently profit in the same market as the Dallas Cowboys, which entered the NFL as an expansion franchise in 1960. After meetings with New Orleans, Atlanta, and Miami, Hunt announced on May 22 that the Texans' new home would be Kansas City, Missouri. Kansas City mayor Harold Roe Bartle (nicknamed "Chief") was instrumental in his city's success in attracting the team. Partly to honor Bartle, the franchise officially became the Kansas City Chiefs on May 26. The San Diego Chargers, under head coach Sid Gillman, won a decisive 51–10 victory over the Boston Patriots for the 1963 AFL Championship. Confident that his team was capable of beating that season's NFL champion Chicago Bears (he had the Chargers' rings inscribed with the phrase "World Champions"), Gillman approached NFL Commissioner Pete Rozelle and proposed a final championship game between the two teams. Rozelle declined the offer; however, the game would be instituted three seasons later. Watershed years (1964–65) A series of events throughout the next few years demonstrated the AFL's ability to achieve a greater level of equality with the NFL. On January 29, 1964, the AFL signed a lucrative $36 million television contract with NBC (beginning in the 1965 season), which gave the league money it needed to compete with the NFL for players. Pittsburgh Steelers owner Art Rooney was quoted as saying to NFL Commissioner Pete Rozelle after receiving the news of the AFL's new TV deal that, "They don't have to call us 'Mister' anymore". A single-game attendance record was set on November 8, 1964, when 61,929 fans packed Shea Stadium to watch the New York Jets and Buffalo Bills. The bidding war for players between the AFL and NFL escalated in 1965. The Chiefs drafted University of Kansas star Gale Sayers in the first round of the 1965 AFL draft (held November 28, 1964), while the Chicago Bears did the same in the NFL draft. Sayers eventually signed with the Bears. A similar situation occurred when the New York Jets and the NFL's St. Louis Cardinals both drafted University of Alabama quarterback Joe Namath. In what was viewed as a key victory for the AFL, Namath signed a $427,000 contract with the Jets on January 2, 1965 (the deal included a new car). It was the highest amount of money ever paid to a collegiate football player, and is cited as the strongest contributing factor to the eventual merger between the two leagues. After the 1963 season, the Newark Bears of the Atlantic Coast Football League expressed interest in joining the AFL; concerns over having to split the New York metro area with the still-uncertain Jets were a factor in the Bears' bid being rejected. In 1965, Milwaukee officials tried to lure an expansion team to play at Milwaukee County Stadium where the Green Bay Packers had played parts of their home schedule after an unsuccessful attempt to lure the Packers there full-time, but Packers head coach Vince Lombardi invoked the team's exclusive lease, and additionally, signed an extension to keep some home games in Milwaukee until 1976. In June 1965, the AFL awarded its first expansion team to Cox Broadcasting of Atlanta. The NFL quickly counteroffered insurance executive Rankin Smith a franchise, which he accepted; the Atlanta Falcons began play as an NFL franchise for the 1966 season. In March 1965, Joe Robbie had met with Commissioner Foss to inquire about an expansion franchise for Miami. On May 6, Robbie secured an agreement with Miami mayor Robert King High to bring a team to Miami. League expansion was approved at a meeting held on June 7, and on August 16 the AFL's ninth franchise was officially awarded to Robbie and entertainer Danny Thomas. The Miami Dolphins joined the league for a fee of $7.5 million and started play in the AFL's Eastern Division in 1966. The AFL also planned to add two more teams by 1967. Escalation and merger (1966–67) In 1966, the rivalry between the AFL and NFL reached an all-time peak. On April 7, Joe Foss resigned as AFL commissioner. His successor was Oakland Raiders head coach and general manager Al Davis, who had been instrumental in turning around the fortunes of that franchise. That following May, Wellington Mara, owner of the NFL's New York Giants, broke a "gentleman's agreement" against signing another league's players and lured kicker Pete Gogolak away from the AFL's Buffalo Bills. In response to the Gogolak signing and no longer content with trying to outbid the NFL for college talent, the AFL under Davis began to also recruit players already on NFL squads. Davis's strategy focused on quarterbacks in particular, and in two months he persuaded seven NFL quarterbacks to sign with the AFL. Although Davis's intention was to help the AFL win the bidding war, some AFL and NFL owners saw the escalation as detrimental to both leagues. Alarmed with the rate of spending in the league, Hilton Hotels forced Barron Hilton to relinquish his stake in the Chargers as a condition of maintaining his leadership role with the hotel chain. The same month Davis was named commissioner, several NFL owners, headed by Dallas Cowboys general manager Tex Schramm, secretly approached Lamar Hunt and other AFL owners and started negotiations with the AFL to merge. A series of secret meetings commenced in Dallas to discuss the concerns of both leagues over rapidly increasing player salaries, as well as the practice of player poaching. Hunt and Schramm completed the basic groundwork for a merger of the two leagues by the end of May, and on June 8, 1966, the merger was officially announced. Under the terms of the agreement, the two leagues would hold a common player draft. The agreement also called for a title game to be played between the champions of the respective leagues. The two leagues would be fully merged by 1970, NFL commissioner Pete Rozelle would remain as commissioner of the merged league, which would be named the NFL. Additional expansion teams would eventually be awarded by 1970 or soon thereafter to bring it to a 28-team league. (The additional expansion would not happen until 1976.) The AFL also agreed to pay indemnities of $18 million to the NFL over 20 years. In protest, Davis resigned as AFL commissioner on July 25 rather than remain until the completion of the merger, and Milt Woodard was named president of the AFL, with the "commissioner" title vacated because of Rozelle's expanded role. On January 15, 1967, the first-ever championship game between the two separate professional football leagues, the "AFL-NFL World Championship Game" (retroactively referred to as Super Bowl I), was played in Los Angeles. After a close first half, the NFL champion Green Bay Packers overwhelmed the AFL champion Kansas City Chiefs, 35–10. The loss reinforced for many the notion that the AFL was an inferior league. Packers head coach Vince Lombardi stated after the game, "I do not think they are as good as the top teams in the National Football League." The second AFL-NFL Championship (Super Bowl II) yielded a similar result. The Oakland Raiders—who had easily beaten the Houston Oilers to win their first AFL championship—were overmatched by the Packers, 33–14. The more experienced Packers capitalized on a number of Raiders miscues and never trailed. Green Bay defensive tackle Henry Jordan offered a compliment to Oakland and the AFL, when he said, "... the AFL is becoming much more sophisticated on offense. I think the league has always had good personnel, but the blocks were subtler and better conceived in this game." The AFL added its tenth and final team on May 24, 1967, when it awarded the league's second expansion franchise to an ownership group from Cincinnati, Ohio, headed by NFL legend Paul Brown. Although Brown had intended to join the NFL, he agreed to join the AFL when he learned that his team would be included in the NFL once the merger was completed. The league's last newest expansion team, the Cincinnati Bengals began play in the 1968 season, finishing last in the Western Division. Legitimacy and the end of an era (1968–1970) While many AFL players and observers believed their league was the equal of the NFL, their first two Super Bowl performances did nothing to prove it. However, on November 17, 1968, when NBC cut away from a game between the Jets and Raiders to air the children's movie Heidi, the ensuing uproar helped disprove the notion that fans still considered the AFL an inferior product. The perception of AFL inferiority forever changed on January 12, 1969, when the AFL Champion New York Jets shocked the heavily favored NFL Champion Baltimore Colts in Super Bowl III. The Colts, who entered the contest favored by as many as 18 points, had completed the 1968 NFL season with a 13–1 record, and won the NFL title with a convincing 34–0 win over the Cleveland Browns. Led by their stalwart defense—which allowed a record-low 144 points—the 1968 Colts were considered one of the best-ever NFL teams. By contrast, the Jets had allowed 280 points, the highest total for any division winner in the two leagues. They had also only narrowly beaten the favored Oakland Raiders 27–23 in the AFL championship game. Jets quarterback Joe Namath recalled that in the days leading up to the game, he grew increasingly angry when told New York had no chance to beat Baltimore. Three days before the game, a frustrated Namath responded to a heckler at the Touchdown Club in Miami by declaring, "We're going to win Sunday, I guarantee it!" Namath and the Jets made good on his guarantee as they held the Colts scoreless until late in the fourth quarter. The Jets won, 16–7, in what is considered one of the greatest upsets in American sports history. With the win, the AFL finally achieved parity with the NFL and legitimized the merger of the two leagues. That notion was reinforced one year later in Super Bowl IV, when the AFL champion Kansas City Chiefs upset the NFL champion Minnesota Vikings, 23–7, in the last championship game to be played between the two leagues. The Vikings, favored by 12½ points, were held to just 67 rushing yards. The last game in AFL history was the AFL All-Star Game, held in Houston's Astrodome on January 17, 1970. The Western All-Stars, led by Chargers quarterback John Hadl, defeated the Eastern All-Stars, 26–3. Buffalo rookie running back O. J. Simpson carried the ball for the last play in AFL history. Hadl was named the game's Most Valuable Player. The AFL ceased to exist as an unincorporated organization on February 1, 1970, when the NFL granted 10 new franchises and issued a new constitution. Prior to the start of the 1970 NFL season, the merged league was organized into two conferences of three divisions each. All ten AFL teams made up the bulk of the new American Football Conference. To avoid having an inequitable number of teams in each conference, the leagues voted to move three NFL teams to the AFC. Motivated by the prospect of an intrastate rivalry with the Bengals as well as by personal animosity toward Paul Brown, Cleveland Browns owner Art Modell quickly offered to include his team in the AFC. He helped persuade the Pittsburgh Steelers (the Browns' archrivals) and Baltimore Colts (who shared the Baltimore-Washington market with the Washington Redskins) to follow suit, and each team received US$3 million to make the switch. The remaining 13 NFL teams became part of the National Football Conference. Pro Football Hall of Fame receiver Charlie Joiner, who started his career with the Houston Oilers (1969), was the last AFL player active in professional football, retiring after the 1986 season, when he played for the San Diego Chargers. Legacy Overview The American Football League stands as the only professional football league to successfully compete against the NFL. When the two leagues merged in 1970, all ten AFL franchises and their statistics became part of the new NFL. Every other professional league that had competed against the NFL before the AFL–NFL merger had folded completely: the three previous leagues named "American Football League" and the All-America Football Conference. From an earlier AFL (1936–1937), only the Cleveland Rams (now the Los Angeles Rams) joined the NFL and are currently operating, as are the Cleveland Browns and the San Francisco 49ers from the AAFC. A third AAFC team, the Baltimore Colts (not related to the 1953–1983 Baltimore Colts or to the current Indianapolis Colts franchise), played only one year in the NFL, disbanding at the end of the 1950 season. The league resulting from the merger was a 26-team juggernaut (since expanded to 32) with television rights covering all of the Big Three television networks (and since the 1990s, the newer Fox network) and teams in close proximity to almost all of the top 40 metropolitan areas, a fact that has precluded any other competing league from gaining traction since the merger; failed attempts to mimic the AFL's success included the World Football League (1974–75), United States Football League (1983–85), the United Football League (2009–2012) and the AAF (2019), and two iterations of the XFL (2001 and 2020), in addition to the NFL-backed and created World League of American Football (1991-92). The AFL was also the most successful of numerous upstart leagues of the 1960s and 1970s that attempted to challenge a major professional league's dominance. All nine teams that were in the AFL at the time the merger was agreed upon were accepted into the league intact (as was the tenth team added between the time of the merger's agreement and finalization), and none of the AFL's teams have ever folded. For comparison, the World Hockey Association (1972–79) managed to have four of its six remaining teams merged into the National Hockey League, which actually caused the older league to contract a franchise, but WHA teams were forced to disperse the majority of their rosters and restart as expansion teams. The merged WHA teams were also not financially sound (in large part from the hefty expansion fees the NHL imposed on them), and three of the four were forced to relocate within 20 years. Like the WHA, The American Basketball Association (1967–76) also managed to have only four of its teams merged into the National Basketball Association, and the rest of the league was forced to fold. Both the WHA and ABA lost several teams to financial insolvency over the course of their existences. The Continental League, a proposed third league for Major League Baseball that was to begin play in 1961, never played a single game, largely because MLB responded to the proposal by expanding to four of that league's proposed cities. Historically, the only other professional sports league in the United States to exhibit a comparable level of franchise stability from its inception was the American League of Major League Baseball, which made its debut in the early 20th century. Rule changes The NFL adopted some of the innovations introduced by the AFL immediately and a few others in the years following the merger. One was including the names on player jerseys. The older league also adopted the practice of using the stadium scoreboard clocks to keep track of the official game time, instead of just having a stopwatch used by the referee. The AFL played a 14-game schedule for its entire existence, starting in 1960. The NFL, which had played a 12-game schedule since 1947, changed to a 14-game schedule in 1961, a year after the American Football League instituted it. The AFL also introduced the two-point conversion to professional football 34 years before the NFL instituted it in 1994 (college football had adopted the two-point conversion in the late 1950s). All of these innovations pioneered by the AFL, including its more exciting style of play and colorful uniforms, have essentially made today's professional football more like the AFL than like the old-line NFL. The AFL's challenge to the NFL also laid the groundwork for the Super Bowl, which has become the standard for championship contests in the United States of America. Television The NFL also adapted how the AFL used the growing power of televised football games, which were bolstered with the help of major network contracts (first with ABC and later with NBC). With that first contract with ABC, the AFL adopted the first-ever cooperative television plan for professional football, in which the proceeds were divided equally among member clubs. It featured many outstanding games, such as the classic 1962 double-overtime American Football League championship game between the Dallas Texans and the defending champion Houston Oilers. At the time it was the longest professional football championship game ever played. The AFL also appealed to fans by offering a flashier style of play (just like the ABA in basketball), compared to the more conservative game of the NFL. Long passes ("bombs") were commonplace in AFL offenses, led by such talented quarterbacks as John Hadl, Daryle Lamonica and Len Dawson. Despite having a national television contract, the AFL often found itself trying to gain a foothold, only to come up against roadblocks. For example, CBS-TV, which broadcast NFL games, ignored and did not report scores from the innovative AFL, on orders from the NFL. It was only after the merger agreement was announced that CBS began to give out AFL scores. Expanding and reintroducing the sport to more cities The AFL took advantage of the burgeoning popularity of football by locating teams in major cities that lacked NFL franchises. Hunt's vision not only brought a new professional football league to California and New York, but introduced the sport to Colorado, restored it to Texas and later to fast-growing Florida, as well as bringing it to Greater Boston for the first time in 12 years. Buffalo, having lost its original NFL franchise in 1929 and turned down by the NFL at least twice (1940 and 1950) for a replacement, returned to the NFL with the merger. The return of football to Kansas City was the first time that city had seen professional football since the NFL's Kansas City Blues of the 1920s; the arrival of the Chiefs, and the contemporary arrival of the St. Louis Football Cardinals, brought professional football back to Missouri for the first time since the temporary St. Louis Gunners of 1934. St. Louis would later regain an NFL franchise in 1995 with the relocation of the LA Rams to the city. The Rams moved back in 2016. In the case of the Dallas Cowboys, the NFL had long sought to return to the Dallas area after the Dallas Texans folded in 1952, but was originally met with strong opposition by Washington Redskins owner George Preston Marshall, who had enjoyed a monopoly as the only NFL team to represent the American South. Marshall later changed his position after future-Cowboys owner Clint Murchison bought the rights to Washington's fight song "Hail to the Redskins" and threatened to prevent Marshall from playing it at games. By then, the NFL wanted to quickly award the new Dallas franchise to Murchison so the team could immediately begin play and compete with the AFL's Texans. As a result, the Cowboys played its inaugural season in 1960 without the benefit of the NFL draft. The Texans eventually ceded Dallas to the Cowboys and became the Kansas City Chiefs. As part of the merger agreement, additional expansion teams would be awarded by 1970 or soon thereafter to bring the league to 28 franchises; this requirement was fulfilled when the Seattle Seahawks and the Tampa Bay Buccaneers began play in 1976. In addition, had it not been for the existence of the Oilers from 1960 to 1996, the Houston Texans also would likely not exist today; the 2002 expansion team restored professional football in Houston after the original charter AFL member Oilers relocated to become the Tennessee Titans. Kevin Sherrington of The Dallas Morning News has argued that the presence of AFL and the subsequent merger radically altered the fortunes of the Pittsburgh Steelers, saving the team "from stinking". Before the merger, the Steelers had long been one of the NFL's worst teams. Constantly lacking the money to build a quality team, the Steelers had only posted eight winning seasons, and just one playoff appearance, since their first year of existence in 1933 until the end of the 1969 season. They also finished with a 1–13 record in 1969, tied with the Chicago Bears for the worst record in the NFL. The $3 million indemnity that the Steelers received for joining the AFC with the rest of the former AFL teams after the merger helped them rebuild into a contender, drafting eventual-Pro Football Hall of Famers like Terry Bradshaw and Joe Greene, and ultimately winning four Super Bowls in the 1970s. Since the 1970 merger, the Steelers have the NFL's highest winning percentage, the most total victories, the most trips to either conference championship game, are tied for the second most trips to the Super Bowl (tied with the Dallas Cowboys and Denver Broncos, trailing only the New England Patriots), and have won six Super Bowl championships, tied with the Patriots for the most in NFL history. Effects on players Perhaps the greatest social legacy of the AFL was the domino effect of its policy of being more liberal than the entrenched NFL in offering opportunity for black players. While the NFL was still emerging from thirty years of segregation influenced by Washington Redskins' owner George Preston Marshall, the AFL actively recruited from small and predominantly black colleges. The AFL's color-blindness led not only to the explosion of black talent on the field, but to the eventual entry of blacks into scouting, coordinating, and ultimately head coaching positions, long after the league merged itself out of existence. The AFL's free agents came from several sources. Some were players who could not find success playing in the NFL, while another source was the then newly-formed Canadian Football League. In the late 1950s, many players released by the NFL, or un-drafted and unsigned out of college by the NFL, went North to try their luck with the CFL (which formed in 1958), and later returned to the states to play in the AFL. In the league's first years, players such as Oilers' George Blanda, Chargers/Bills' Jack Kemp, Texans' Len Dawson, the NY Titans' Don Maynard, Raiders/Patriots/Jets' Babe Parilli, Pats' Bob Dee proved to be AFL standouts. Other players such as the Broncos' Frank Tripucka, the Pats' Gino Cappelletti, the Bills' Cookie Gilchrist and the Chargers' Tobin Rote, Sam DeLuca and Dave Kocourek also made their mark to give the fledgling league badly needed credibility. Rounding out this mix of potential talent were the true "free agents", the walk-ons and the "wanna-be's", who tried out in droves for the chance to play professional American football. After the AFL–NFL merger agreement in 1966, and after the AFL's Jets defeated an extremely strong Baltimore Colts team, a popular misconception fostered by the NFL and spread by media reports was that the AFL defeated the NFL because of the Common Draft instituted in 1967. This apparently was meant to assert that the AFL could not achieve parity as long as it had to compete with the NFL in the draft. But the 1968 Jets had less than a handful of "common draftees". Their stars were honed in the AFL, many of them since the Titans days. Players who chose the AFL to develop their talent included Lance Alworth and Ron Mix of the Chargers, who had also been drafted by the NFL's San Francisco 49ers and Baltimore Colts respectively. Both eventually were elected to the Pro Football Hall of Fame after earning recognition during their careers as being among the best at their positions. Among specific teams, the 1964 Buffalo Bills stood out by holding their opponents to a pro football record 913 yards rushing on 300 attempts, while also recording fifty quarterback sacks in a 14-game schedule. In 2009, a five-part series, Full Color Football: The History of the American Football League, on the Showtime Network, refuted many of the long-held misconceptions about the AFL. In it, Abner Haynes tells of how his father forbade him to accept being drafted by the NFL, after drunken scouts from that league had visited the Haynes home; the NFL Cowboys' Tex Schramm is quoted as saying that if his team had ever agreed to play the AFL's Dallas Texans, they would very likely have lost; George Blanda makes a case for more AFL players being inducted to the Pro Football Hall of Fame by pointing out that Hall of Famer Willie Brown was cut by the Houston Oilers because he couldn't cover Oilers flanker Charlie Hennigan in practice. Later, when Brown was with the Broncos, Hennigan needed nine catches in one game against the Broncos to break Lionel Taylor's Professional Football record of 100 catches in one season. Hennigan caught the nine passes and broke the record, even though he was covered by Brown. Influence on professional football coaching The AFL also spawned coaches whose style and techniques have profoundly affected the play of professional football to this day. In addition to AFL greats like Hank Stram, Lou Saban, Sid Gillman and Al Davis were eventual hall of fame coaches such as Bill Walsh, a protégé of Davis with the AFL Oakland Raiders for one season; and Chuck Noll, who worked for Gillman and the AFL LA/San Diego Chargers from 1960 through 1965. Others include Buddy Ryan (AFL's New York Jets), Chuck Knox (Jets), Walt Michaels (Jets), and John Madden (AFL's Oakland Raiders). Additionally, many prominent coaches began their pro football careers as players in the AFL, including Sam Wyche (Cincinnati Bengals), Marty Schottenheimer (Buffalo Bills), Wayne Fontes (Jets), and two-time Super Bowl winner Tom Flores (Oakland Raiders). Flores also has a Super Bowl ring as a player (1969 Kansas City Chiefs). AFL 50th Anniversary Celebration As the influence of the AFL continues through the present, the 50th anniversary of its launch was celebrated during 2009. The season-long celebration began in August with the 2009 Pro Football Hall of Fame Game in Canton, Ohio, between two AFC teams (as opposed to the AFC-vs-NFC format the game first adopted in 1971). The opponents were two of the original AFL franchises, the Buffalo Bills and Tennessee Titans (the former Houston Oilers). Bills' owner Ralph C. Wilson Jr. (a 2009 Hall of Fame inductee) and Titans' owner Bud Adams were the only surviving members of the Foolish Club at the time (both are now deceased; Wilson's estate sold the team in 2014), the eight original owners of AFL franchises. (As of the season, the Titans and Chiefs are still owned by descendants of the original eight owners.) The Hall of Fame Game was the first of several "Legacy Weekends", during which each of the "original eight" AFL teams sported uniforms from their AFL era. Each of the 8 teams took part in at least two such "legacy" games. On-field officials also wore red-and-white-striped AFL uniforms during these games. In the fall of 2009, the Showtime pay-cable network premiered Full Color Football: The History of the American Football League, a 5-part documentary series produced by NFL Films that features vintage game film and interviews as well as more recent interviews with those associated with the AFL. The NFL sanctioned a variety of "Legacy" gear to celebrate the AFL anniversary, such as "throwback" jerseys, T-shirts, signs, pennants and banners, including items with the logos and colors of the Dallas Texans, Houston Oilers, and New York Titans, the three of the Original Eight AFL teams which have changed names or venues. A December 5, 2009, story by Ken Belson in The New York Times quotes league officials as stating that AFL "Legacy" gear made up twenty to thirty percent of the league's annual $3 billion merchandise income. Fan favorites were the Denver Broncos' vertically striped socks, which could not be re-stocked quickly enough. AFL franchises Today, two of the NFL's eight divisions are composed entirely of former AFL teams, the AFC West (Broncos, Chargers, Chiefs, and Raiders) and the AFC East (Bills, Dolphins, Jets, and Patriots). Additionally, the Bengals now play in the AFC North and the Tennessee Titans (formerly the Oilers) play in the AFC South. Former stadiums: Oakland–Alameda County Coliseum, Los Angeles Memorial Coliseum, Fenway Park, Nickerson Field, Alumni Stadium, Nippert Stadium, the Cotton Bowl, Balboa Stadium and Kezar Stadium), still standing but currently vacant (Houston Astrodome), or demolished. AFL playoffs From 1960 to 1968, the AFL determined its champion via a single-elimination playoff game between the winners of its two divisions. The home teams alternated each year by division, so in 1968 the Jets hosted the Raiders, even though Oakland had a better record (this was changed in 1969). In 1963, the Buffalo Bills and Boston Patriots finished tied with identical records of 7–6–1 in the AFL East Division. There was no tie-breaker protocol in place, so a one-game playoff was held in War Memorial Stadium in December. The visiting Patriots defeated the host Bills 26–8. The Patriots traveled to San Diego as the Chargers completed a three-game season sweep over the weary Patriots with a 51–10 victory. A similar situation occurred in the 1968 season, when the Oakland Raiders and the Kansas City Chiefs finished the regular season tied with identical records of 12–2 in the AFL West Division. The Raiders beat the Chiefs 41–6 in a division playoff to qualify for the AFL Championship Game. In 1969, the final year of the independent AFL, Professional Football's first "wild card" playoffs were conducted. A four-team playoff was held, with the second-place teams in each division playing the winner of the other division. The Chiefs upset the Raiders in Oakland 17–7 in the league's Championship, the final AFL game played. The Kansas City Chiefs were the first Super Bowl champion to win two road playoff games and the first wildcard team to win the Super Bowl, although the term "wildcard" was coined by the media, and not used officially until several years later. AFL Championship Games AFL All-Star games The AFL did not play an All-Star game after its first season in 1960, but did stage All-Star games for the 1961 through 1969 seasons. All-Star teams from the Eastern and Western divisions played each other after every season except 1965. That season, the league champion Buffalo Bills played all-stars from the other teams. After the 1964 season, the AFL All-Star game had been scheduled for early 1965 in New Orleans' Tulane Stadium. After numerous black players were refused service by a number of area hotels and businesses, black and white players alike called for a boycott. Led by Bills players such as Cookie Gilchrist, the players successfully lobbied to have the game moved to Houston's Jeppesen Stadium. All-Time AFL Team As chosen by 1969 AFL Hall of Fame Selection committee members: AFL records The following is a sample of some records set during the existence of the league. The NFL considers AFL statistics and records equivalent to its own. Yards passing, game – 464, George Blanda (Oilers, October 29, 1961) Yards passing, season – 4,007, Joe Namath (Jets, 1967) Yards passing, career – 21,130, Jack Kemp (Chargers, Bills) Yards rushing, game – 243, Cookie Gilchrist (Bills, December 8, 1963) Yards rushing, season – 1,458, Jim Nance (Patriots, 1966) Yards rushing, career – 5,101, Clem Daniels (Texans, Raiders) Receptions, season – 101, Charlie Hennigan (Oilers, 1964) Receptions, career – 567, Lionel Taylor (Broncos) Points scored, season – 155, Gino Cappelletti (Patriots, 1964) Points scored, career – 1,100, Gino Cappelletti (Patriots) Players, coaches, and contributors List of American Football League players American Football League Most Valuable Players American Football League Rookies of the Year American Football League Draft American Football League Officials Commissioners/Presidents of the American Football League Joe Foss, commissioner (November 30, 1959 – April 7, 1966) Al Davis, commissioner (April 8, 1966 – July 25, 1966) Milt Woodard, president (July 25, 1966 – March 12, 1970) See also American Football League Draft American Football League win–loss records American Football League seasons American Football League playoffs American Football League Most Valuable Players American Football League Rookies of the Year American Football League Officials AFL–NFL merger List of leagues of American football American Basketball Association World Hockey Association Footnotes References History: The AFL – Pro Football Hall of Fame (link). External links RemembertheAFL.com Website afl-football.50webs.com American Football League week-by-week box scores, 1960–1969 The Summer of the Little Super Bowls PFRA article about the 1926 seasons of both the NFL and AFL PFRA article about the 1930s and 40s AFL Pro Football Hall of Fame American Football League Legacy Game Official Titans website story on the AFL's 50th Anniversary Celebration Schedule of American Football League Legacy Games ESPN.com article on AFL Legacy Games The New York Times article on AFL "Legacy" gear Defunct professional sports leagues in the United States Sports leagues established in 1960 1970 disestablishments in the United States 1960 establishments in the United States Defunct national American football leagues Sports leagues disestablished in 1970 1970 mergers and acquisitions
2369
https://en.wikipedia.org/wiki/Aston%20Martin
Aston Martin
Aston Martin Lagonda Global Holdings PLC is a British manufacturer of luxury sports cars and grand tourers. Its predecessor was founded in 1913 by Lionel Martin and Robert Bamford. Steered from 1947 by David Brown, it became associated with expensive grand touring cars in the 1950s and 1960s, and with the fictional character James Bond following his use of a DB5 model in the 1964 film Goldfinger. Their sports cars are regarded as a British cultural icon. Aston Martin has held a Royal Warrant as purveyor of motorcars to Charles III since 1982, and has over 160 car dealerships in 53 countries, making it a global automobile brand. The company is traded on the London Stock Exchange and is a constituent of the FTSE 250 Index. In 2003 it received the Queen's Award for Enterprise for outstanding contribution to international trade. The company has survived seven bankruptcies throughout its history. The headquarters and main production of its sports cars and grand tourers are in a facility in Gaydon, Warwickshire, England, on the former site of RAF Gaydon, adjacent to the Jaguar Land Rover Gaydon Centre. The old facility in Newport Pagnell, Buckinghamshire is the present home of the Aston Martin Works classic car department, which focuses on heritage sales, service, spares and restoration operations. The factory in St Athan, Wales features three converted 'super-hangars' from MOD St Athan, and serves as the production site of Aston Martin's first-ever SUV, the DBX. Aston Martin plans on building electric vehicles on both its Gaydon and St Athan factories by 2025. Aston Martin has been involved in motorsport at various points in its history, mainly in sports car racing, and also in Formula One. The Aston Martin brand is increasingly being used, mostly through licensing, on other products including a submarine, real estate development, and aircraft. History Founding Aston Martin was founded in 1913 by Lionel Martin and Robert Bamford. The two had joined forces as Bamford & Martin the previous year to sell cars made by Singer from premises in Callow Street, London where they also serviced GWK and Calthorpe vehicles. Martin raced specials at Aston Hill near Aston Clinton, and the pair decided to make their own vehicles. The first car to be named Aston Martin was created by Martin by fitting a four-cylinder Coventry-Simplex engine to the chassis of a 1908 Isotta Fraschini. They acquired premises at Henniker Mews in Kensington and produced their first car in March 1915. Production could not start because of the outbreak of the First World War, when Martin joined the Admiralty and Bamford joined the Army Service Corps. 1918–1939: Interwar years After the war they found new premises at Abingdon Road, Kensington and designed a new car. Bamford left in 1920 and Bamford & Martin was revitalised with funding from Count Louis Zborowski. In 1922, Bamford & Martin produced cars to compete in the French Grand Prix, which went on to set world speed and endurance records at Brooklands. Three works Team Cars with 16-valve twin cam engines were built for racing and record-breaking: chassis number 1914, later developed as the Green Pea; chassis number 1915, the Razor Blade record car; and chassis number 1916, later developed as the Halford Special. Approximately 55 cars were built for sale in two configurations; long chassis and short chassis. Bamford & Martin went bankrupt in 1924 and was bought by Dorothea, Lady Charnwood, who put her son John Benson on the board. Bamford & Martin got into financial difficulty again in 1925 and Martin was forced to sell the company (Bamford had already left). Later that year, Bill Renwick, Augustus (Bert) Bertelli and investors including Lady Charnwood took control of the business. They renamed it Aston Martin Motors and moved it to the former Whitehead Aircraft Limited Hanworth works in Feltham. Renwick and Bertelli had been in partnership some years and had developed an overhead-cam four-cylinder engine using Renwick's patented combustion chamber design, which they had tested in an Enfield-Allday chassis. The only "Renwick and Bertelli" motor car made, it was known as "Buzzbox" and still survives. The pair had planned to sell their engine to motor manufacturers, but having heard that Aston Martin was no longer in production realised they could capitalise on its reputation to jump-start the production of a completely new car. Between 1926 and 1937 Bertelli was both technical director and designer of all new Aston Martins, since known as "Bertelli cars". They included the 1½-litre "T-type", "International", "Le Mans", "MKII" and its racing derivative, the "Ulster", and the 2-litre 15/98 and its racing derivative, the "Speed Model". Most were open two-seater sports cars bodied by Bert Bertelli's brother Enrico (Harry), with a small number of long-chassis four-seater tourers, dropheads and saloons also produced. Bertelli was a competent driver keen to race his cars, one of few owner/manufacturer/drivers. The "LM" team cars were very successful in national and international motor racing including at Le Mans. Financial problems reappeared in 1932. Aston Martin was rescued for a year by Lance Prideaux Brune before passing it on to Sir Arthur Sutherland. In 1936, Aston Martin decided to concentrate on road cars, producing just 700 until World War II halted work. Production shifted to aircraft components during the war. 1947–1972: David Brown In 1947, old-established (1860) privately owned Huddersfield gear and machine tools manufacturer David Brown Limited bought Aston Martin, putting it under control of its Tractor Group. David Brown became Aston Martin's latest saviour. He also acquired Lagonda, without its factory, for its 2.6-litre W. O. Bentley-designed engine. Lagonda moved operations to Newport Pagnell and shared engines, resources and workshops. Aston Martin began to build the classic "DB" series of cars. In April 1950, they announced planned production of their Le Mans prototype to be called the DB2, followed by the DB2/4 in 1953, the DB2/4 MkII in 1955, the DB Mark III in 1957 and the Italian-styled 3.7 L DB4 in 1958. While these models helped Aston Martin establish a good racing pedigree, the DB4 stood out and yielded the famous DB5 in 1963. Aston stayed true to its grand touring style with the DB6 (1965–70), and DBS (1967–1972). The six-cylinder engines of these cars from 1954 up to 1965 were designed by Tadek Marek. 1972–1975: William Willson Aston Martin was often financially troubled. In 1972, David Brown paid off all its debts, said to be £5 million or more, and handed it for £101 to Company Developments, a Birmingham-based investment bank consortium chaired by accountant William Willson. More detail on this period may be read at Willson's biography. The worldwide recession, lack of working capital and the difficulties of developing an engine to meet California's exhaust emission requirements – it stopped the company's US sales – again pulled Aston Martin into receivership at the end of 1974. The company had employed 460 workers when the manufacturing plant closed. 1975–1981: Sprague and Curtis The receiver sold the business in April 1975 for £1.05 million to North American businessman Peter Sprague of National Semiconductor, Toronto hotelier George Minden, and Jeremy Turner, a London businessman, who insisted to reporters that Aston Martin remained a British controlled business. Sprague later claimed he had fallen in love with the factory, not the cars, the workforce's craftsmanship dedication and intelligence. At this point, he and Minden had brought in investor Alan Curtis, a British office property developer, together with George Flather, a retired Sheffield steel magnate. Six months later, in September 1975, the factory – shut down the previous December – re-opened under its new owner as Aston Martin Lagonda Limited with 100 employees, and planned to lift staff to 250 by the end of 1975. In January 1976, AML revealed that it now held orders for 150 cars for the US, 100 for other markets and another 80 from a Japanese importing agency. At the Geneva Motor Show, Fred Hartley, managing director and sales director for 13 years before that, announced he had resigned over "differences in marketing policy". The new owners pushed Aston Martin into modernising its line, introducing the V8 Vantage in 1977, the convertible Volante in 1978, and the one-off Bulldog styled by William Towns in 1980. Towns also styled the futuristic new Lagonda saloon, based on the V8 model. Curtis, who had a 42% stake in Aston Martin, also brought about a change in direction from the usual customers who were Aston Martin fans, to successful young married businessmen. Prices had been increased by 25%. There was speculation that AML was about to buy Italian automobile manufacturer Lamborghini. At the end of the 1970s, there was widespread debate about running MG into the Aston Martin consortium. 85 Conservative MPs formed themselves into a pressure group to get British Leyland to release their grip and hand it over. CH Industrials plc (car components) bought a 10% share in AML. But in July 1980, blaming a recession, AML cut back their workforce of 450 by more than 20%, making those people redundant. 1981–1987: Victor Gauntlett In January 1981, there having been no satisfactory revival partners, Alan Curtis and Peter Sprague announced they had never intended to maintain a long-term financial stake in Aston Martin Lagonda and it was to be sold to Pace Petroleum's Victor Gauntlett. Sprague and Curtis pointed out that under their ownership AML finances had improved to where an offer for MG might have been feasible. Gauntlett bought a 12.5% stake in Aston Martin for £500,000 via Pace Petroleum in 1980, with Tim Hearley of CH Industrials taking a similar share. Pace and CHI took over as joint 50/50 owners at the beginning of 1981, with Gauntlett as executive chairman. Gauntlett also led the sales team, and after some development and publicity when the Lagonda became the world's fastest four-seater production car, was able to sell the car in Oman, Kuwait, and Qatar. In 1982, Aston Martin was granted a Royal Warrant of Appointment by the Prince of Wales. Understanding that it would take some time to develop new Aston Martin products, they created an engineering service subsidiary to develop automotive products for other companies. It was decided to use a trade name of Salmons & Son, their in-house coachbuilder, Tickford, which Aston Martin had bought in 1955. Tickford's name had been long associated with expensive high-quality carriages and cars along with their folding roofs. New products included a Tickford Austin Metro, a Tickford Ford Capri and even Tickford train interiors, particularly on the Jaguar XJS. Pace continued sponsoring racing events, and now sponsored all Aston Martin Owners Club events, taking a Tickford-engined Nimrod Group C car owned by AMOC President Viscount Downe, which came third in the Manufacturers Championship in both 1982 and 1983. It also finished seventh in the 1982 24 Hours of Le Mans race. However, sales of production cars were now at an all-time low of 30 cars produced in 1982. As trading became tighter in the petroleum market, and Aston Martin was requiring more time and money, Gauntlett agreed to sell Hays/Pace to the Kuwait Investment Office in September 1983. As Aston Martin required greater investment, he also agreed to sell his share holding to American importer and Greek shipping tycoon Peter Livanos, who invested via his joint venture with Nick and John Papanicolaou, ALL Inc. Gauntlett remained chairman of AML, 55% of the stake was owned by ALL, with Tickford a 50/50 venture between ALL and CHI. The uneasy relationship was ended when ALL exercised options to buy a larger share in AML; CHI's residual shares were exchanged for CHI's complete ownership of Tickford, which retained the development of existing Aston Martin projects. In 1984, Papanicolaou's Titan shipping business was in trouble so Livanos's father George bought out the Papanicolaou's shares in ALL, while Gauntlett again became a shareholder with a 25% holding in AML. The deal valued Aston Martin/AML at £2 million, the year it built its 10,000th car. Although as a result Aston Martin had to make 60 members of the workforce redundant, Gauntlett bought a stake in Italian styling house Zagato, and resurrected its collaboration with Aston Martin. In 1986, Gauntlett negotiated the return of the fictional British secret agent James Bond to Aston Martin. Cubby Broccoli had chosen to recast the character using actor Timothy Dalton, in an attempt to re-root the Bond-brand back to a more Sean Connery-like feel. Gauntlett supplied his personal pre-production Vantage for use in the filming of The Living Daylights, and sold a Volante to Broccoli for use at his home in America. Gauntlett turned down the role of a KGB colonel in the film, however: "I would have loved to have done it but really could not afford the time." 1987–2007: Ford Motor Company As Aston Martin needed funds to survive in the long term, Ford bought a 75% stake in the company in 1987, and bought the rest later. In May of that year, Victor Gauntlett and Prince Michael of Kent were staying at the home of Contessa Maggi, the wife of the founder of the original Mille Miglia, while watching the revival event. Another house guest was Walter Hayes, vice-president of Ford of Europe. Despite problems over the previous acquisition of AC Cars, Hayes saw the potential of the brand and the discussion resulted in Ford taking a share holding in September 1987. In 1988, having produced some 5,000 cars in 20 years, a revived economy and successful sales of limited edition Vantage, and 52 Volante Zagato coupés at £86,000 each; Aston Martin finally retired the ancient V8 and introduced the Virage range. Although Gauntlett was contractually to stay as chairman for two years, his racing interests took the company back into sports car racing in 1989 with limited European success. However, with engine rule changes for the 1990 season and the launch of the new Volante model, Ford provided the limited supply of Cosworth engines to the Jaguar cars racing team. As the entry-level DB7 would require a large engineering input, Ford agreed to take full control of Aston Martin, and Gauntlett handed over Aston Martin's chairmanship to Hayes in 1991. In 1992, the high-performance variant of the Virage called the Vantage was announced, and the following year Aston Martin renewed the DB range by announcing the DB7. By 1993, Ford had fully acquired the company after having built a stake in 1987. Ford placed Aston Martin in the Premier Automotive Group, invested in new manufacturing and ramped up production. In 1994, Ford opened a new factory at Banbury Road in Bloxham to manufacture the DB7. In 1995, Aston Martin produced a record 700 cars. Until the Ford era, cars had been produced by hand coachbuilding craft methods, such as the English wheel. During the mid 1990s, the Special Projects Group, a secretive unit with Works Service at Newport Pagnell, created an array of special coach-built vehicles for the Brunei royal family. In 1998, the 2,000th DB7 was built, and in 2002, the 6,000th, exceeding production of all of the previous DB series models. The DB7 range was revamped by the addition of more powerful V12 Vantage models in 1999, and in 2001, Aston Martin introduced the V12-engined flagship model called the Vanquish which succeeded the aging Virage (now called the V8 Coupé). At the North American International Auto Show in Detroit, Michigan in 2003, Aston Martin introduced the V8 Vantage concept car. Expected to have few changes before its introduction in 2005, the Vantage brought back the classic V8 engine to allow Aston Martin to compete in a larger market. 2003 also saw the opening of the Gaydon factory, the first purpose-built factory in Aston Martin's history. The facility is situated on a site of a former RAF V Bomber airbase, with an front building for offices, meeting rooms and customer reception, and a production building. Also introduced in 2003 was the DB9 coupé, which replaced the ten-year-old DB7. A convertible version of the DB9, the DB9 Volante, was introduced at the 2004 Detroit auto show. In October 2004, Aston Martin set up the dedicated Aston Martin Engine Plant (AMEP) within the Ford Germany Niehl, Cologne plant. With the capacity to produce up to 5,000 engines a year by 100 specially trained personnel, like traditional Aston Martin engine production from Newport Pagnell, assembly of each unit was entrusted to a single technician from a pool of 30, with V8 and V12 variants assembled in under 20 hours. By bringing engine production back to within Aston Martin, the promise was that Aston Martin would be able to produce small runs of higher performance variants' engines. This expanded engine capacity allowed the entry-level V8 Vantage sports car to enter production at the Gaydon factory in 2006, joining the DB9 and DB9 Volante. In December 2003, Aston Martin announced it would return to motor racing in 2005. A new division was created, called Aston Martin Racing, which became responsible, together with Prodrive, for the design, development, and management of the DBR9 program. The DBR9 competes in the GT class in sports car races, including the world-famous 24 Hours of Le Mans. In 2006, an internal audit led Ford to consider divesting itself of parts of its Premier Automotive Group. After suggestions of selling Jaguar Cars, Land Rover, or Volvo Cars were weighed, Ford announced in August 2006 it had engaged UBS AG to sell all or part of Aston Martin at auction. 2007–2018: Private Limited Company On 12 March 2007, a consortium led by Prodrive chairman David Richards purchased Aston Martin for £475 million (US$848 million). The group included American investment banker John Sinders and two Kuwaiti companies namely Investment Dar and Adeem Investment. Prodrive had no financial involvement in the deal. Ford kept a stake in Aston Martin valued at £40 million (US$70 million). To demonstrate the V8 Vantage's durability across hazardous terrain and promote the car in China, the first east–west crossing of the Asian Highway was undertaken between June and August 2007. A pair of Britons drove from Tokyo to Istanbul before joining the European motorway network for another to London. The promotion was so successful Aston Martin opened dealerships in Shanghai and Beijing within three months. On 19 July 2007, the Newport Pagnell plant rolled out the last of nearly 13,000 cars made there since 1955, a Vanquish S. The Tickford Street facility was converted and became the home of the Aston Martin Works classic car department which focuses on heritage sales, service, spares and restoration operations. UK production was subsequently concentrated on the facility in Gaydon on the former RAF V Bomber airbase. In March 2008, Aston Martin announced a partnership with Magna Steyr to outsource manufacture of over 2,000 cars annually to Graz, Austria, reassuringly stating: "The continuing growth and success of Aston Martin is based upon Gaydon as the focal point and heart of the business, with the design and engineering of all Aston Martin products continuing to be carried out there." More dealers in Europe and the new pair in China brought the total to 120 in 28 countries. On 1 September 2008, Aston Martin announced the revival of the Lagonda marque, proposing a concept car to be shown in 2009 to coincide with the brand's 100th anniversary. The first production cars were slated for production in 2012. In December 2008, Aston Martin announced it would cut its workforce from 1,850 to 1,250 due to the economic recession. The first four-door Rapide grand tourers rolled out of the Magna Steyr factory in Graz, Austria in 2010. The contract manufacturer provides dedicated facilities to ensure compliance with the exacting standards of Aston Martin and other marques, including Mercedes-Benz. Then CEO of the company, Ulrich Bez had publicly speculated about outsourcing all of Aston Martin's operations with the exception of marketing. In September 2011, it was announced that production of the Rapide would be returned to Gaydon in the second half of 2012, restoring all of the company's automobile manufacture there. Italian private equity fund Investindustrial signed a deal on 6 December 2012 to buy a 37.5% stake in Aston Martin, investing £150 million as a capital increase. This was confirmed by Aston Martin in a press release on 7 December 2012. David Richards left Aston Martin in 2013, returning to concentrate on Prodrive. In April 2013, it was reported that Bez would be leaving his role as the chief executive officer to take up a more ambassadorial position. On 2 September 2014, Aston Martin announced it had appointed the Nissan executive Andy Palmer as the new CEO with Bez retaining a position as non-executive chairman. As sales had been declining from 2015, Aston Martin sought new customers (particularly wealthy female buyers) with introducing concept cars like the DBX SUV along with track focused cars like the Vulcan. According to Palmer, the troubles started when sales of the DB9 failed to generate sufficient fund to develop next-generation models which led to a downward spiral of declining sales and profitability. Palmer outlined that the company plans to develop two new platforms, add a crossover, refresh its supercar lineup and leverage its technology alliance with Daimler as part of its six-year plan to make the 100-year-old British brand consistently profitable. He stated, "In the first century we went bankrupt seven times. The second century is about making sure that is not the case." In preparation for its next-generation of sports cars, the company invested £20 million ($33.4 million) to expand its manufacturing plant in Gaydon. The expansion at the Gaydon plant includes a new chassis and pilot build facility, as well as an extension of the parts and logistics storage area, and new offices. In total, Aston Martin will add approximately to the plant. In 2014, Aston Martin suffered a pre-tax loss of £72 million, almost triple of the amount of 2013 selling 3,500 cars during the year, well below the 7,300 cars sold in 2007 and 4,200 sold in 2013 respectively. In March 2014, Aston Martin issued "payment in kind" notes of US$165 million, at 10.25% interest, in addition to the £304 million of senior secured notes at 9.25% issued in 2011. Aston Martin also had to secure an additional investment of £200 million from its shareholders to fund development of new models. It was reported that Aston Martin's pre-tax losses for 2016 increased by 27% to £162.8 million, the sixth year it continued to suffer a loss. In 2016, the company selected a site in St Athan, South Wales for its new factory. The Welsh facility was unanimously chosen by Aston's board despite fierce competition from other locations as far afield as the Americas, Eastern Europe, the Middle East, Europe, as well as two other sites in the UK, believed to be Bridgend and Birmingham. The facility featured three existing ‘super-hangars’ of MOD St Athan. Construction work of converting the hangars commenced in April 2017. Aston Martin returned to profit in 2017 after selling over 5,000 cars. The company made a pre-tax profit of £87 million compared with a £163 million loss in 2016. 2017 also marked the return of production of the Newport Pagnell facility ten years after it originally ceased. 2013–present: Partnership with Mercedes-Benz Group In December 2013, Aston Martin signed a deal with Mercedes-Benz Group (at the time known as Daimler) to supply the next generation of Aston Martin cars with Mercedes-AMG engines. Mercedes-AMG also was to supply Aston Martin with electrical systems. This technical partnership was intended to support Aston Martin's launch of a new generation of models that would incorporate new technology and engines. In exchange, Mercedes will get as much as 5% equity in Aston Martin and a non-voting seat on its board. The first model to sport the Mercedes-Benz technology was the DB11, announced at the 86th Geneva Motor Show in March 2016. It featured Mercedes-Benz electronics for the entertainment, navigation and other systems. It was also the first model to use Mercedes-AMG V8 engines. In October 2020, Mercedes confirmed it will increase its holding "in stages" from 5% to 20%. In return, Aston Martin will have access to Mercedes-Benz hybrid and electric drivetrain technologies for its future models. 2018–present: Listed on the London Stock Exchange After "completing a turnaround for the once perennially loss-making company that could now be valued at up to 5 billion pounds ($6.4 billion)," and now reporting a full-year pre-tax profit of £87 million (compared with a £163 million loss in 2016) Aston Martin in August 2018 announced plans to float the company at the London Stock Exchange as Aston Martin Lagonda Global Holdings plc. The company was the subject of an initial public offering on the London Stock Exchange on 3 October 2018. In the same year, Aston Martin opened a new vehicle dynamics test and development centre at Silverstone's Stowe Circuit alongside a new HQ in London. In June 2019, the company opened its new factory in St Athan for the production of its first-ever SUV the DBX. The factory was finally completed and officially opened on 6 December 2019. When full production begins in the second quarter of 2020, around 600 people will be employed at the factory, rising to 750 when peak production is reached. On 31 January 2020 it was announced that Canadian billionaire and investor Lawrence Stroll was leading a consortium, Yew Tree Overseas Limited, who will pay £182 million in return for 16.7% stake in the company. The re-structuring includes a £318 million cash infusion through a new rights issue, generating a total of £500 million for the company. Stroll will also be named as chairman, replacing Penny Hughes. Swiss pharmaceutical magnate Ernesto Bertarelli and Mercedes-AMG Petronas F1 team principal and CEO Toto Wolff have also joined the consortium, acquiring 3.4% and 4.8% stakes, respectively. In March 2020, Stroll increased his stake in the company to 25%. On 26 May 2020, Aston Martin announced that Andy Palmer had stepped down as CEO. Tobias Moers of Mercedes-AMG will succeed him starting 1 August, with Keith Stanton as interim chief operating officer. In June 2020, the company announced that it cut out 500 jobs as a result of the poor sales, an outcome of the COVID-19 pandemic lockdown. In March 2021, executive chairman Lawrence Stroll stated that the company plans on building electric vehicles by 2025. In May 2022, Aston Martin named 76-year old Amedeo Felisa as the new chief executive officer, replacing Tobias Moers. Roberto Fedeli was also announced as the new chief technical officer. In November 2020, a communications agency called Clarendon Communications published a report comparing the environmental impact of various powertrain options for cars. After the report received coverage from The Sunday Times and other publications, it emerged that the company had been set up in February that year and was registered under the name of Rebecca Stephens – the wife of James Stephens, who is the government affairs director of Aston Martin Lagonda. Citing a study by Polestar, the report stated that electric vehicles would need to be driven before they would have lower overall emissions than a petrol car. This statement was disputed by electric vehicle researcher Auke Hoekstra, who argued that the report underestimated the emissions from combustion engine vehicles and did not consider the emissions from creating petrol. According to him, a typical EV would need to drive 16,000–18,000 miles (25,700–30,000 km) in order to offset the emissions from manufacture. Bosch and a number of other companies were also involved with the report. In July 2022, Saudi Arabia's Public Investment Fund (PIF) will take a stake in the company through a £78 million equity placing as well as a £575 million separate rights issue, giving it two board seats in the company. After the rights issue, the Saudi fund will have a 16.7% stake in Aston Martin, behind the 18.3% holding by Stroll's Yew Tree consortium while the Mercedes-Benz Group will own 9.7%. In September 2022, Chinese automaker Geely acquired a 7.6% stake in the company. In December 2022, Stroll and the Yew Tree consortium increased their stake in the company to 28.29%. In May 2023, Geely increased its stake to 17%, becoming the third-largest shareholder after the Yew Tree consortium and the Saudi Arabia Public Investment Fund. In June 2023, Aston Martin signed an agreement with Lucid Motors after selecting it to help supply electric motors, powertrains, and battery systems for its upcoming range of fully electric cars. In return, Aston Martin will make cash payments and issue a 3.7percent stake in its company to Lucid, worth $232million in total. In October 2023, Aston Martin announced that it would compete in the FIA World Endurance Championship and IMSA SportsCar Championship in 2025. Notable events In August 2017, a 1956 Aston Martin DBR1/1 sold at a Sotheby's auction at the Pebble Beach, California Concours d'Elegance for US$22,550,000, which made it the most expensive British car ever sold at an auction, according to Sotheby's. The car had previously been driven by Carroll Shelby and Stirling Moss. Other notable Aston Martin models sold at an auction include a 1962 Aston Martin DB4 GT Zagato for US$14,300,000 in New York in 2015, and a 1963 Aston Martin DP215 for US$21,455,000 in August 2018. Models Pre-war cars 1921–1925 Aston Martin Standard Sports 1927–1932 Aston Martin First Series 1929–1932 Aston Martin International 1932–1932 Aston Martin International Le Mans 1932–1934 Aston Martin Le Mans 1933–1934 Aston Martin 12/50 Standard 1934–1936 Aston Martin Mk II 1934–1936 Aston Martin Ulster 1936–1940 Aston Martin 2-litre Speed Models (23 built) The last 8 were fitted with C-type bodywork 1937–1939 Aston Martin 15/98 Post-war cars 1948–1950 Aston Martin 2-Litre Sports (DB1) 1950–1953 Aston Martin DB2 1953–1957 Aston Martin DB2/4 1957–1959 Aston Martin DB Mark III 1958–1963 Aston Martin DB4 1961–1963 Aston Martin DB4 GT Zagato 1963–1965 Aston Martin DB5 1965–1966 Aston Martin Short Chassis Volante 1965–1969 Aston Martin DB6 1967–1972 Aston Martin DBS 1969–1989 Aston Martin V8 1977–1989 Aston Martin V8 Vantage 1986–1990 Aston Martin V8 Zagato 1989–1996 Aston Martin Virage/Virage Volante 1989–2000 Aston Martin Virage 1993–2000 Aston Martin Vantage 1996–2000 Aston Martin V8 Coupe/V8 Volante 1993–2003 Aston Martin DB7/DB7 Vantage 2001–2007 Aston Martin V12 Vanquish/Vanquish S 2002–2003 Aston Martin DB7 Zagato 2002–2004 Aston Martin DB AR1 2004–2016 Aston Martin DB9 2005–2018 Aston Martin V8 and V12 Vantage 2007–2012 Aston Martin DBS V12 2009–2012 Aston Martin One-77 2010–2020 Aston Martin Rapide/Rapide S 2011–2012 Aston Martin Virage/Virage Volante 2011–2013 Aston Martin Cygnet, based on the Toyota iQ 2012–2013 Aston Martin V12 Zagato 2012–2018 Aston Martin Vanquish/Vanquish Volante 2015–2016 Aston Martin Vulcan 2016–present Aston Martin DB11 2018–present Aston Martin Vantage 2018–present Aston Martin DBS Superleggera 2020–present Aston Martin DBX Other 1944 Aston Martin Atom (concept) 1961–1964 Lagonda Rapide 1976–1989 Aston Martin Lagonda 1980 Aston Martin Bulldog (concept) 1993 Lagonda Vignale (concept) 2001 Aston Martin Twenty Twenty (Italdesign concept) 2007 Aston Martin V12 Vantage RS (concept) 2007–2008 Aston Martin V8 Vantage N400 2009 Aston Martin Lagonda SUV (concept) 2010 Aston Martin V12 Vantage Carbon Black Edition 2010 Aston Martin DBS Carbon Black Edition 2013 Aston Martin Rapide Bertone Jet 2+2 (concept) 2013 Aston Martin CC100 Speedster (concept) 2015 Aston Martin DB10 (concept) 2015–2016 Lagonda Taraf 2019 Aston Martin Vanquish Vision (concept) 2019 Aston Martin DBS GT Zagato 2020 Aston Martin V12 Speedster 2021 Aston Martin Victor 2022 Aston Martin DBR22 2023 Aston Martin Valour Current models Aston Martin DB11 Aston Martin DBS Superleggera Aston Martin DBX Aston Martin Vantage Aston Martin Valkyrie Upcoming models Aston Martin DB12 Aston Martin Valhalla Gallery Brand expansion Since 2015, Aston Martin has sought to increase its appeal to women as a luxury lifestyle brand. A female advisory panel was established to adapt the design of the cars to the taste of women. In September 2016, a 37-foot-long Aston Martin speedboat was unveiled called the Aston Martin AM37 powerboat. In September 2017, Aston Martin announced that they had partnered with submarine building company Triton Submarines to build a submarine called Project Neptune. Aston Martin has collaborated with the luxury clothing company Hackett London to deliver items of clothing. In November 2017, Aston Martin unveiled a special limited edition bicycle after collaborating with bicycle manufacturer Storck. Aston Martin and global property developer G&G Business Developments are currently building a 66-storey luxury condominium tower called Aston Martin Residences at 300 Biscayne Boulevard Way in Miami, Florida, which is set for completion in 2021. In July 2018, Aston Martin unveiled the Volante Vision Concept, a luxury concept aircraft with vertical take-off and landing capabilities. Also in July, a Lego version of James Bond's DB5 car was put on sale and an Aston Martin-branded watch was released in collaboration with TAG Heuer. In October 2018, Aston Martin announced it was opening a design and brand studio in Shanghai. Motorsport Aston Martin is currently associated with two different racing organisations. The Aston Martin Formula One team which competes in the Formula One Championship and Aston Martin Racing which currently competes in the FIA World Endurance Championship. Both racing organisations use the Aston Martin brand, but are not directly owned by Aston Martin. The Aston Martin Formula One team is owned by major Aston Martin shareholder Lawrence Stroll and operated by his company AMR GP, while Aston Martin Racing is operated by racing company Prodrive as part of a partnership with Aston Martin. Formula One Aston Martin participated as a Formula One constructor in and entering six races over the two years but failing to score any points. In January 2020, it was announced that the Racing Point F1 Team is due to be rebranded as Aston Martin for the 2021 season, as a result of a funding investment led by Racing Point owner Lawrence Stroll. As part of the rebrand, the team switched their racing colour of BWT pink to a modern iteration of Aston Martin's British racing green. The Aston Martin AMR21 was unveiled in March 2021 and became Aston Martin's first Formula One car after a 61-year absence from the sport. Racing cars (post-war) Aston Martin DB3 (1950–1953) Aston Martin DB3S (1953–1956) Aston Martin DBR1 (1956–1959) Aston Martin DBR2 (1957–1958) Aston Martin DBR3 (1958) Aston Martin DBR4 (1959) Aston Martin DBR5 (1960) Aston Martin DP212 (1962) Aston Martin DP214 (1963) Aston Martin DP215 (1963) Aston Martin RHAM/1 (1976–1979) Aston Martin AMR1 (1989) Aston Martin AMR2 (never raced) Aston Martin DBR9 (2005–2008) Aston Martin DBRS9 (2005–2008) Aston Martin V8 Vantage N24 (2006–2008) Aston Martin V8 Vantage Rally GT (2006–2010) Aston Martin V8 Vantage GT2 (2008–2017) Aston Martin V8 Vantage GT4 (2008–2018) Aston Martin DBR1-2 (2009) Aston Martin AMR-One (2011) Aston Martin Vantage GTE (2018–) Aston Martin AMR21 (2021) Aston Martin AMR22 (2022) Aston Martin AMR23 (2023) Aston Martin-powered racing cars Cooper-Aston Martin (1963) Lola T70-Aston Martin (1967) Aston Martin DPLM (1980–1982) Nimrod NRA/C2-Aston Martin (1982–1984) Aston Martin EMKA C83/1 and C84/1 (1983–1985) Cheetah G604-Aston Martin Lola B08/60-Aston Martin (2008–) 24 Hours of Le Mans finishes Sponsorships Aston Martin sponsors 2. Bundesliga club 1860 Munich. See also Aston Martin Heritage Trust Museum Aston Martin Owners Club List of car manufacturers of the United Kingdom References External links 1913 establishments in England 2018 initial public offerings Automotive companies of England British racecar constructors British royal warrant holders Car brands Car manufacturers of the United Kingdom Companies based in Warwickshire Companies listed on the London Stock Exchange English brands Luxury motor vehicle manufacturers Motor vehicle manufacturers of England Premier Automotive Group Sports car manufacturers Vehicle manufacturing companies established in 1913
2376
https://en.wikipedia.org/wiki/Abdul%20Rashid%20Dostum
Abdul Rashid Dostum
Abdul Rashid Dostum ( ; ; Uzbek Latin: , Uzbek Cyrillic: , ; born 25 March 1954) is an Afghan exiled politician, former Marshal in the Afghan National Army, founder and leader of the political party Junbish-e Milli. Dostum was a major army commander in the communist government during the Soviet–Afghan War, and in 2001 was the key indigenous ally to U.S. Special Forces and the CIA during the campaign to topple the Taliban government. He is one of the most powerful warlords since the beginning of the Afghan wars, known for siding with winners during different wars. Dostum has also referred to as a Kingmaker due to his significant role in Afghan politics. An ethnic Uzbek from a peasant family in Jawzjan province, Dostum joined the People's Democratic Party of Afghanistan (PDPA) as a teenager before enlisting in the Afghan National Army and training as a paratrooper, serving in his native region around Sheberghan. Soon with the start of the Soviet–Afghan War, Dostum commanded a KHAD militia and eventually gained a reputation, often defeating mujahideen commanders in northern Afghanistan and even persuading some to defect to the communist cause. Much of the country's north was in strong government control as a result. He achieved several promotions in the army and was honored as a "Hero of Afghanistan" by President Mohammed Najibullah in 1988. By this time he was commanding up to 45,000 troops in the region under his responsibility. Following the dissolution of the Soviet Union, Dostum played a central role in the collapse of Najibullah's government by "defecting" to the mujahideen; the division-sized loyal forces he commanded in the north became an independent paramilitary of his newly founded party called Junbish-e Milli. He allied with Ahmad Shah Massoud and together they captured Kabul, before another civil war loomed. Initially supporting the new government of Burhanuddin Rabbani, he switched sides in 1994 by allying with Gulbuddin Hekmatyar, but he backed Rabbani again by 1996. During this time he remained in control of the country's north which functioned as a relatively stable proto-state, but remained a loose partner of Massoud in the Northern Alliance. A year later, Mazar-i-Sharif was overrun by his former aide Abdul Malik Pahlawan, resulting in a battle in which he regained control. In 1998, the city was overrun by the Taliban and Dostum fled the country until returning to Afghanistan in 2001, joining the Northern Alliance forces after the US invasion and leading his loyal faction in the Fall of Mazar-i-Sharif. After the fall of the Taliban, he joined interim president Hamid Karzai's administration as Deputy Defense Minister and later served as chairman Joint Chiefs of Staff of the Afghan Army, a role often viewed as ceremonial. His militia feuded with forces loyal to general Atta Muhammad Nur. Dostum was a candidate in the 2004 elections, and was an ally of victorious Karzai in the 2009 elections. From 2011, he was part of the leadership council of the National Front of Afghanistan along with Ahmad Zia Massoud and Mohammad Mohaqiq. He served as Vice President of Afghanistan in Ashraf Ghani's administration from 2014 to 2020. In 2020 he promoted to marshal rank in after a political agreement between Ghani and former Chief Executive Abdullah Abdullah. Dostum is a controversial figure in Afghanistan. He is seen as a capable and fierce military leader and remains wildly popular among the Uzbek community in the country; Many of his supporters call him "Pasha" (پاشا), an honorable Uzbek/Turkic term. However he has also been widely accused of committing atrocities and war crimes, most notoriously the suffocation of up to 1,000 Taliban fighters in the Dasht-i-Leili massacre and he was widely feared among the populace. In 2018, the International Criminal Court (ICC) was reported to be considering launching an inquiry into whether Dostum had engaged in war crimes in Afghanistan. Early life Dostum was born in 1954 in Khwaja Du Koh near Sheberghan in Jowzjan province, Afghanistan. Coming from an impoverished ethnic Uzbek family, he received a very basic traditional education as he was forced to drop out of school at a young age. From there, he took up work in the village's major gas fields. Career Dostum began working in 1970 in a state-owned gas refinery in Sheberghan, participating in union politics, as the new government started to arm the staff of the workers in the oil and gas refineries. The reason for this was to create "groups for the defense of the revolution". Because of the new communist ideas entering Afghanistan in the 1970s, he enlisted in the Afghan National Army in 1978. Dostum received his basic military training in Jalalabad. His squadron was deployed in the rural areas around Sheberghan, under the auspices of the Ministry of National Security. As a Parcham member of the People's Democratic Party of Afghanistan (PDPA), he was exiled by the purge of the party's Khalqist leaders, living in Peshawar, Pakistan for a while. After the Soviet invasion (Operation Storm-333) and installation of Babrak Karmal as head of state, Dostum returned to Afghanistan where he started commanding a local pro-government militia in his native Jawzjan Province. Soviet–Afghan War By the mid-1980s, he commanded around 20,000 militia men and controlled the northern provinces of Afghanistan. While the unit recruited throughout Jowzjan and had a relatively broad base, many of its early troops and commanders came from Dostum's home village. He left the army after the purge of Parchamites, but returned after the Soviet occupation began. During the Soviet–Afghan War, Dostum was commanding a militia battalion to fight and rout mujahideen forces; he had been appointed an officer due to prior military experience. This eventually became a regiment and later became incorporated into the defense forces as the 53rd Infantry Division. Dostum and his new division reported directly to President Mohammad Najibullah. Later on he became the commander of the military unit 374 in Jowzjan. He defended the Soviet-backed Afghan government against the mujahideen forces throughout the 1980s. While he was only a regional commander, he had largely raised his forces by himself. The Jowzjani militia Dostum controlled was one of the few in the country which was able to be deployed outside its own region. They were deployed in Kandahar in 1988 when Soviet forces were withdrawing from Afghanistan. Due to his efforts in the army, Dostum was awarded the title "Hero of the Republic of Afghanistan" by President Najibullah. Civil war and northern Afghanistan autonomous state Dostum's men would become an important force in the fall of Kabul in 1992, with Dostum deciding to defect from Najibullah and allying himself with opposition commanders Ahmad Shah Massoud and Sayed Jafar Naderi, the head of the Isma'ili community, and together they captured the capital city. With the help of fellow defectors Mohammad Nabi Azimi and Abdul Wakil, his forces entered Kabul by air in the afternoon of 14 April. He and Massoud fought in a coalition against Gulbuddin Hekmatyar. Massoud and Dostum's forces joined to defend Kabul against Hekmatyar. Some 4,000–5,000 of his troops, units of his Sheberghan-based 53rd Division and Balkh-based Guards Division, garrisoning Bala Hissar fort, Maranjan Hill and Khwaja Rawash Airport, where they stopped Najibullah from entering to flee. Dostum then left Kabul for his northern stronghold Mazar-i-Sharif, where he ruled, in effect, an independent region (or 'proto-state'), often referred as the Northern Autonomous Zone. He printed his own Afghan currency, ran a small airline named Balkh Air, and formed relations with countries like Uzbekistan effectively creating his own proto-state with an army of up to 40,000 men, and with tanks supplied by Uzbekistan and Russia. While the rest of the country was in chaos, his region remained prosperous and functional, and it won him the support from people of all ethnic groups. Many people fled to his territory to escape the violence and fundamentalism imposed by the Taliban later on. In 1994, Dostum allied himself with Gulbuddin Hekmatyar against the government of Burhanuddin Rabbani and Ahmad Shah Massoud, but in 1995 sided with the government again. Taliban era Following the rise of the Taliban and their capture of Kabul, Dostum aligned himself with the Northern Alliance (United Front) against the Taliban. The Northern Alliance was assembled in late 1996 by Dostum, Massoud and Karim Khalili against the Taliban. At this point he is said to have had a force of some 50,000 men supported by both aircraft and tanks. Much like other Northern Alliance leaders, Dostum also faced infighting within his group and was later forced to surrender his power to General Abdul Malik Pahlawan. Malik entered into secret negotiations with the Taliban, who promised to respect his authority over much of northern Afghanistan, in exchange for the apprehension of Ismail Khan, one of their enemies. Accordingly, on 25 May 1997, Malik arrested Khan, handed him over and let the Taliban enter Mazar-e-Sharif, giving them control over most of northern Afghanistan. Because of this, Dostum was forced to flee to Turkey. However, Malik soon realized that the Taliban were not sincere with their promises as he saw his men being disarmed. He then rejoined the Northern Alliance, and turned against his erstwhile allies, driving them from Mazar-e-Sharif. In October 1997, Dostum returned from exile and retook charge. After Dostum briefly regained control of Mazar-e-Sharif, the Taliban returned in 1998 and he again fled to Turkey. Operation Enduring Freedom Dostum returned to Afghanistan in May 2001 to open up a new front before the U.S.-led campaign against the Taliban joined him, along with Commander Massoud, Ismail Khan and Mohammad Mohaqiq. On 17 October 2001, the CIA's eight-man Team Alpha, including Johnny Micheal Spann landed in the Dar-e-Suf to link up with Dostum. Three days later, the 12 members of Operational Detachment Alpha (ODA) 595 landed to join forces with Dostum and Team Alpha. Dostum, the Tajik commander Atta Muhammad Nur and their American allies defeated Taliban forces and recaptured Mazar-i-Sharif on 10 November 2001. On 24 November 2001, 15,000 Taliban soldiers were due to surrender after the Siege of Kunduz to American and Northern Alliance forces. Instead, 400 Al-Qaeda prisoners arrived just outside Mazar-i-Sharif. After they surrendered to Dostum, they were transferred to the 19th century garrison fortress, Qala-i-Jangi. The next day, while being questioned by CIA officers Spann and David Tyson, they used concealed weapons to revolt, triggering what became the Battle of Qala-i-Jangi against the guards. The uprising was finally brought under control after six days. Dasht-i-Leili massacre Dostum has been accused by Western journalists of responsibility for the suffocating or otherwise killing of Taliban prisoners in December 2001, with the number of victims estimated as 2,000. In 2009, Dostum denied the accusations and US President Obama ordered an investigation into the massacre. Karzai administration In the aftermath of Taliban's removal from northern Afghanistan, forces loyal to Dostum frequently clashed with Tajik forces loyal to Atta Muhammad Nur. Atta's men kidnapped and killed a number of Dostum's men, and constantly agitated to gain control of Mazar-e-Sharif. Through the political mediations of the Karzai administration, the International Security Assistance Force (ISAF) and the United Nations, the Dostum-Atta feud gradually declined, leading to their alignment in a new political party. Dostum served as deputy defense minister the early period of the Karzai administration. On 20 May 2003, Dostum narrowly escaped an assassination attempt. He was often residing outside Afghanistan, mainly in Turkey. In February 2008, he was suspended after the apparent kidnapping and torture of a political rival. Time in Turkey Some media reports in 2008 stated earlier that Dostum was "seeking political asylum" in Turkey while others said he was exiled. One Turkish media outlet said Dostum was visiting after flying there with then Turkey's Foreign Minister Ali Babacan during a meeting of the Organization for Security and Cooperation in Europe (OSCE). On 16 August 2009, Dostum was asked to return from exile to Afghanistan to support President Hamid Karzai in his bid for re-election. He later flew by helicopter to his northern stronghold of Sheberghan, where he was greeted by thousands of his supporters in the local stadium. He subsequently made overtures to the United States, promising he could "destroy the Taliban and al Qaeda" if supported by the U.S., saying that "the U.S. needs strong friends like Dostum." Ghani administration On 7 October 2013, the day after filing his nomination for the 2014 general elections as running mate of Ashraf Ghani, Dostum issued a press statement that some news media were willing to welcome as "apologies": "Many mistakes were made during the civil war (…) It is time we apologize to the Afghan people who were sacrificed due to our negative policies (…) I apologize to the people who suffered from the violence and civil war (…)". Dostum was directly chosen as First Vice President of Afghanistan in the April–June 2014 Afghan presidential election, next to Ashraf Ghani as president and Sarwar Danish as second vice president. In July 2016, Human Rights Watch accused Abdul Rashid Dostum's National Islamic Movement of Afghanistan of killing, abusing and looting civilians in the northern Faryab Province during June. Militia forces loyal to Dostum stated that the civilians they targeted – at least 13 killed and 32 wounded – were supporters of the Taliban. In November 2016, at a buzkashi match, he punched his political rival Ahmad Ischi, and then his bodyguards beat Ischi. In 2017, he was accused of having Ischi kidnapped in that incident and raped with a gun on camera during a five-day detention, claims that Dostum denies but that nevertheless forced him into exile in Turkey. On 26 July 2018, he narrowly escaped a suicide bombing by ISIL-KP as he returned to Afghanistan at Kabul airport. Just after Dostum's convoy departed the airport, an attacker armed with a suicide vest bombed a crowd of several hundred people celebrating his return at the entrance to the airport. The attack killed 14 and injured 50, including civilians and armed security. On 30 March 2019, Dostum again escaped an expected assassination attempt while traveling from Mazar-e-Sharif to Jawzjan Province, though two of his bodyguards were killed. The Taliban claimed responsibility for the attack, the second in eight months. On 11 August 2021 during the Taliban's nationwide offensive, Dostum, along with Atta Muhammad Nur, led the government's defence of the city of Mazar-i-Sharif. Three days later, they fled across Hairatan to Uzbekistan. Atta Nur claimed that they were forced to flee due to a "conspiracy". Both men later pled allegiance to the National Resistance Front of Afghanistan, the remaining remnants of the collapsed Islamic Republic of Afghanistan. Dostum, Atta, Yunus Qanuni, Abdul Rasul Sayyaf and some other political figures formed the Supreme Council of National Resistance of the Islamic Republic of Afghanistan in opposition to the new Taliban regime in October 2021. Political and social views Dostum is considered to be liberal and somewhat leftist. Being ethnic Uzbek, he has worked on the battlefield with leaders from all other major ethnic groups, Hazaras, Tajiks and Pashtuns. When Dostum was ruling his northern Afghanistan proto-state before the Taliban took over in 1998, women were able to go about unveiled, girls were allowed to go to school and study at the University of Balkh, cinemas showed Indian films, music played on television, and Russian vodka and German beer were openly available: activities which were all banned by the Taliban. He viewed the ISAF forces attempt to crush the Taliban as ineffective and has gone on record saying in 2007 that he could mop up the Taliban "in six months" if allowed to raise a 10,000 strong army of Afghan veterans. As of 2007, senior Afghan government officials did not trust Dostum as they were concerned that he might be secretly rearming his forces. Personal life Dostum is more than tall and has been described as "beefy". He generally favors to wear a camouflage Soviet style military uniform, and has had a trademark bushy moustache. Dostum was married to a woman named Khadija. According to Brian Glyn Williams, Khadija had an accidental death in the 1990s which broke Dostum as he "really loved his wife". Dostum eventually remarried after Khadija's death. He named one of his sons Mustafa Kamal, after the founder of the modern Turkish republic, Mustafa Kemal Ataturk. Dostum has spent a considerable amount of time in Turkey, and some of his family reside there. Dostum is known to drink alcohol, a rarity in Afghanistan, and apparently a fan of Russian vodka. He reportedly suffered from diabetes. In 2014 when he became vice president, Dostum reportedly gave up drinking for healthy meals and morning jogs. In popular culture Navid Negahban portrays Dostum in the 2018 film 12 Strong. Dostum appears as a playing card in the board game A Distant Plain. See also Abdul Jabar Qahraman Afghan Civil War (1989–1992) Afghan Civil War (1992–1996) References Bibliography External links General Abdul Rashid Dostum's official website Article on Abdul Rashid Dostum on Islamic Republic Of Afghanistan (.com) BBC online profile Biography about Dostum CNN Presents: House of War Afghanistan Mass Grave: The Dasht-e Leili War Crimes Investigation As possible Afghan war-crimes evidence removed, U.S. silent Obama Calls for Probe into 2001 Massacre of Suspected Taliban POWs by US-Backed Afghan Warlord – video by Democracy Now! Eyewitness account from National Geographic war reporter Robert Young Pelton 1954 births Living people Afghan military personnel Vice presidents of Afghanistan Afghan warlords Afghanistan conflict (1978–present) People of the Soviet–Afghan War Afghan communists National Islamic Movement of Afghanistan politicians People from Jowzjan Province Afghan expatriates in Turkey Afghan exiles Afghan expatriates in Pakistan People of the Democratic Republic of Afghanistan Islamic State of Afghanistan 20th-century Afghan politicians Afghan Uzbek politicians Afghan military officers Marshals
2380
https://en.wikipedia.org/wiki/Accelerated%20Graphics%20Port
Accelerated Graphics Port
Accelerated Graphics Port (AGP) is a parallel expansion card standard, designed for attaching a video card to a computer system to assist in the acceleration of 3D computer graphics. It was originally designed as a successor to PCI-type connections for video cards. Since 2004, AGP was progressively phased out in favor of PCI Express (PCIe), which is serial, as opposed to parallel; by mid-2008, PCI Express cards dominated the market and only a few AGP models were available, with GPU manufacturers and add-in board partners eventually dropping support for the interface in favor of PCI Express. Advantages over PCI AGP is a superset of the PCI standard, designed to overcome PCI's limitations in serving the requirements of the era's high-performance graphics cards. The primary advantage of AGP is that it doesn't share the PCI bus, providing a dedicated, point-to-point pathway between the expansion slot(s) and the motherboard chipset. The direct connection also allows for higher clock speeds. The second major change is the use of split transactions, wherein the address and data phases are separated. The card may send many address phases so the host can process them in order, avoiding any long delays caused by the bus being idle during read operations. Third, PCI bus handshaking is simplified. Unlike PCI bus transactions whose length is negotiated on a cycle-by-cycle basis using the FRAME# and STOP# signals, AGP transfers are always a multiple of 8 bytes long, with the total length included in the request. Further, rather than using the IRDY# and TRDY# signals for each word, data is transferred in blocks of four clock cycles (32 words at AGP 8× speed), and pauses are allowed only between blocks. Finally, AGP allows (mandatory only in AGP 3.0) sideband addressing, meaning that the address and data buses are separated so the address phase does not use the main address/data (AD) lines at all. This is done by adding an extra 8-bit "SideBand Address" bus over which the graphics controller can issue new AGP requests while other AGP data is flowing over the main 32 address/data (AD) lines. This results in improved overall AGP data throughput. This great improvement in memory read performance makes it practical for an AGP card to read textures directly from system RAM, while a PCI graphics card must copy it from system RAM to the card's video memory. System memory is made available using the graphics address remapping table (GART), which apportions main memory as needed for texture storage. The maximum amount of system memory available to AGP is defined as the AGP aperture. History The AGP slot first appeared on x86-compatible system boards based on Socket 7 Intel P5 Pentium and Slot 1 P6 Pentium II processors. Intel introduced AGP support with the i440LX Slot 1 chipset on August 26, 1997, and a flood of products followed from all the major system board vendors. The first Socket 7 chipsets to support AGP were the VIA Apollo VP3, SiS 5591/5592, and the ALI Aladdin V. Intel never released an AGP-equipped Socket 7 chipset. FIC demonstrated the first Socket 7 AGP system board in November 1997 as the FIC PA-2012 based on the VIA Apollo VP3 chipset, followed very quickly by the EPoX P55-VP3 also based on the VIA VP3 chipset which was first to market. Early video chipsets featuring AGP support included the Rendition Vérité V2200, 3dfx Voodoo Banshee, Nvidia RIVA 128, 3Dlabs PERMEDIA 2, Intel i740, ATI Rage series, Matrox Millennium II, and S3 ViRGE GX/2. Some early AGP boards used graphics processors built around PCI and were simply bridged to AGP. This resulted in the cards benefiting little from the new bus, with the only improvement used being the 66 MHz bus clock, with its resulting doubled bandwidth over PCI, and bus exclusivity. Intel's i740 was explicitly designed to exploit the new AGP feature set; in fact it was designed to texture only from AGP memory, making PCI versions of the board difficult to implement (local board RAM had to emulate AGP memory.) Microsoft first introduced AGP support into Windows 95 OEM Service Release 2 (OSR2 version 1111 or 950B) via the USB SUPPLEMENT to OSR2 patch. After applying the patch the Windows 95 system became Windows 95 version 4.00.950 B. The first Windows NT-based operating system to receive AGP support was Windows NT 4.0 with Service Pack 3, introduced in 1997. Linux support for AGP enhanced fast data transfers was first added in 1999 with the implementation of the AGPgart kernel module. Later use With the increasing adoption of PCIe, graphics cards manufacturers continued to produce AGP cards as the standard became obsolete. As GPUs began to be designed to connect to PCIe, an additional PCIe-to-AGP bridge-chip was required to create an AGP-compatible graphics card. The inclusion of a bridge, and the need for a separate AGP card design, incurred additional board costs. The GeForce 6600 and ATI Radeon X800 XL, released during 2004–2005, were the first bridged cards. In 2009 AGP cards from Nvidia had a ceiling of the GeForce 7 Series. In 2011 DirectX 10-capable AGP cards from AMD vendors (Club 3D, HIS, Sapphire, Jaton, Visiontek, Diamond, etc.) included the Radeon HD 2400, 3450, 3650, 3850, 4350, 4650, and 4670. The HD 5000 AGP series mentioned in the AMD Catalyst software was never available. There were many problems with the AMD Catalyst 11.2 - 11.6 AGP hotfix drivers under Windows 7 with the HD 4000 series AGP video cards; use of 10.12 or 11.1 AGP hotfix drivers is the recommended workaround. Several of the vendors listed above make available past versions of the AGP drivers. By 2010, no new motherboard chipsets supported AGP and few new motherboards had AGP slots, however some continued to be produced with older AGP-supporting chipsets. In 2016, Windows 10 version 1607 dropped support for AGP. Possible future removal of support for AGP from open source Linux kernel drivers was considered in 2020. Versions Intel released "AGP specification 1.0" in 1997. It specified 3.3 V signals and 1× and 2× speeds. Specification 2.0 documented 1.5 V signaling, which could be used at 1×, 2× and the additional 4× speed and 3.0 added 0.8 V signaling, which could be operated at 4× and 8× speeds. (1× and 2× speeds are physically possible, but were not specified.) Available versions are listed in the adjacent table. AGP version 3.5 is only publicly mentioned by Microsoft under Universal Accelerated Graphics Port (UAGP), which specifies mandatory supports of extra registers once marked optional under AGP 3.0. Upgraded registers include PCISTS, CAPPTR, NCAPID, AGPSTAT, AGPCMD, NISTAT, NICMD. New required registers include APBASELO, APBASEHI, AGPCTRL, APSIZE, NEPG, GARTLO, GARTHI. There are various physical interfaces (connectors); see the Compatibility section. Official extensions AGP Pro An official extension for cards that required more electrical power, with a longer slot with additional pins for that purpose. AGP Pro cards were usually workstation-class cards used to accelerate professional computer-aided design applications employed in the fields of architecture, machining, engineering, simulations, and similar fields. 64-bit AGP A 64-bit channel was once proposed as an optional standard for AGP 3.0 in draft documents, but it was dropped in the final version of the standard. The standard allows 64-bit transfer for AGP8× reads, writes, and fast writes; 32-bit transfer for PCI operations. Unofficial variations A number of non-standard variations of the AGP interface have been produced by manufacturers. Internal AGP interface Ultra-AGP, Ultra-AGPII It is an internal AGP interface standard used by SiS for the north bridge controllers with integrated graphics. The original version supports same bandwidth as AGP 8×, while Ultra-AGPII has maximum 3.2GB/s bandwidth. PCI-based AGP ports AGP Express Not a true AGP interface, but allows an AGP card to be connected over the legacy PCI bus on a PCI Express motherboard. It is a technology used on motherboards made by ECS, intended to allow an existing AGP card to be used in a new motherboard instead of requiring a PCIe card to be obtained (since the introduction of PCIe graphics cards few motherboards provide AGP slots). An "AGP Express" slot is basically a PCI slot (with twice the electrical power) with an AGP connector. It offers backward compatibility with AGP cards, but provides incomplete support (some AGP cards do not work with AGP Express) and reduced performance—the card is forced to use the shared PCI bus at its lower bandwidth, rather than having exclusive use of the faster AGP. AGI The ASRock Graphics Interface (AGI) is a proprietary variant of the Accelerated Graphics Port (AGP) standard. Its purpose is to provide AGP-support for ASRock motherboards that use chipsets lacking native AGP support. However, it is not fully compatible with AGP, and several video card chipsets are known not to be supported. AGX The EPoX Advanced Graphics eXtended (AGX) is another proprietary AGP variant with the same advantages and disadvantages as AGI. User manuals recommend not using AGP 8× ATI cards with AGX slots. XGP The Biostar Xtreme Graphics Port is another AGP variant, also with the same advantages and disadvantages as AGI and AGX. PCIe based AGP ports AGR The Advanced Graphics Riser is a variation of the AGP port used in some PCIe motherboards made by MSI to offer limited backwards compatibility with AGP. It is, effectively, a modified PCIe slot allowing for performance comparable to an AGP 4×/8× slot, but does not support all AGP cards; the manufacturer published a list of some cards and chipsets that work with the modified slot. Compatibility AGP cards are backward and forward compatible within limits. 1.5 V-only keyed cards will not go into 3.3 V slots and vice versa, though "Universal" cards exist which will fit into either type of slot. There are also unkeyed "Universal" slots that will accept either type of card. When an AGP Universal card is plugged-into an AGP Universal slot, only the 1.5 V portion of the card is used. Some cards, like Nvidia's GeForce 6 series (except the 6200) or ATI's Radeon X800 series, only have keys for 1.5 V to prevent them from being installed in older mainboards without 1.5 V support. Some of the last modern cards with 3.3 V support were: the Nvidia GeForce FX series (FX 5200, FX 5500, FX 5700, some FX 5800, FX 5900 and some FX 5950) certain GeForce 6 Series and 7 series (few cards were made with 3.3v support except for 6200 where 3.3v support was common) some GeForce 6200/6600/6800 and GeForce 7300/7600/7800/7900/7950 cards (really uncommon compared to their AGP 1.5v only versions) the ATI Radeon 9500/9700/9800 (R300/R350) (but not 9600/9800 (R360/RV360)). AGP Pro cards will not fit into standard slots, but standard AGP cards will work in a Pro slot. Motherboards equipped with a Universal AGP Pro slot will accept a 1.5 V or 3.3 V card in either the AGP Pro or standard AGP configuration, a Universal AGP card, or a Universal AGP Pro card. Some cards incorrectly have dual notches, and some motherboards incorrectly have fully open slots, allowing a card to be plugged into a slot that does not support the correct signaling voltage, which may damage card or motherboard. Some incorrectly designed older 3.3 V cards have the 1.5 V key. There are some proprietary systems incompatible with standard AGP; for example, Apple Power Macintosh computers with the Apple Display Connector (ADC) have an extra connector which delivers power to the attached display. Some cards designed to work with a specific CPU architecture (e.g., PC, Apple) may not work with others due to firmware issues. Mark Allen of Playtools.com made the following comments regarding Practical AGP Compatibility for AGP 3.0 and AGP 2.0: Power consumption Actual power supplied by an AGP slot depends upon the card used. The maximum current drawn from the various rails is given in the specifications for the various versions. For example, if maximum current is drawn from all supplies and all voltages are at their specified upper limits, an AGP 3.0 slot can supply up to 48.25 watts; this figure can be used to specify a power supply conservatively, but in practice a card is unlikely ever to draw more than 40 W from the slot, with many using less. AGP Pro provides additional power up to 110 W. Many AGP cards had additional power connectors to supply them with more power than the slot could provide. Protocol An AGP bus is a superset of a 66 MHz conventional PCI bus and, immediately after reset, follows the same protocol. The card must act as a PCI target, and optionally may act as a PCI master. (AGP 2.0 added a "fast writes" extension which allows PCI writes from the motherboard to the card to transfer data at higher speed.) After the card is initialized using PCI transactions, AGP transactions are permitted. For these, the card is always the AGP master and the motherboard is always the AGP target. The card queues multiple requests which correspond to the PCI address phase, and the motherboard schedules the corresponding data phases later. An important part of initialization is telling the card the maximum number of outstanding AGP requests which may be queued at a given time. AGP requests are similar to PCI memory read and write requests, but use a different encoding on command lines C/BE[3:0] and are always 8-byte aligned; their starting address and length are always multiples of 8 bytes (64 bits). The three low-order bits of the address are used instead to communicate the length of the request. Whenever the PCI GNT# signal is asserted, granting the bus to the card, three additional status bits ST[2:0] indicate the type of transfer to be performed next. If the bits are 0xx, a previously queued AGP transaction's data is to be transferred; if the three bits are 111, the card may begin a PCI transaction or (if sideband addressing is not in use) queue a request in-band using PIPE#. AGP command codes Like PCI, each AGP transaction begins with an address phase, communicating an address and 4-bit command code. The possible commands are different from PCI, however: 000p Read Read 8×(AD[2:0]+1) = 8, 16, 24, ..., 64 bytes. The least significant bit p is 0 for low-priority, 1 for high. 001x (reserved): 010p Write Write 8×(AD[2:0]+1) = 8–64 bytes. 011x (reserved): 100p Long read Read 32×(AD[2:0]+1) = 32, 64, 96, ..., 256 bytes. This is the same as a read request, but the length is multiplied by four. 1010 Flush Force previously written data to memory, for synchronization. This acts as a low-priority read, taking a queue slot and returning 8 bytes of random data to indicate completion. The address and length supplied with this command are ignored. 1011 (reserved): 1100 Fence This acts as a memory fence, requiring that all earlier AGP requests complete before any following requests. Ordinarily, for increased performance, AGP uses a very weak consistency model, and allows a later write to pass an earlier read. (E.g. after sending "write 1, write 2, read, write 3, write 4" requests, all to the same address, the read may return any value from 2 to 4. Only returning 1 is forbidden, as writes must complete before following reads.) This operation does not require any queue slots. 1101 Dual address cycle When making a request to an address above 232, this is used to indicate that a second address cycle will follow with additional address bits. This operates like a regular PCI dual address cycle; it is accompanied by the low-order 32 bits of the address (and the length), and the following cycle includes the high 32 address bits and the desired command. The two cycles make one request, and take only one slot in the request queue. This request code is not used with side-band addressing. 111x (reserved): AGP 3.0 dropped high-priority requests and the long read commands, as they were little used. It also mandated side-band addressing, thus dropping the dual address cycle, leaving only four request types: low-priority read (0000), low-priority write (0100), flush (1010) and fence (1100). In-band AGP requests using PIPE# To queue a request in-band, the card must request the bus using the standard PCI REQ# signal, and receive GNT# plus bus status ST[2:0] equal to 111. Then, instead of asserting FRAME# to begin a PCI transaction, the card asserts the PIPE# signal while driving the AGP command, address, and length on the C/BE[3:0], AD[31:3] and AD[2:0] lines, respectively. (If the address is 64 bits, a dual address cycle similar to PCI is used.) For every cycle that PIPE# is asserted, the card sends another request without waiting for acknowledgement from the motherboard, up to the configured maximum queue depth. The last cycle is marked by deasserting REQ#, and PIPE# is deasserted on the following idle cycle. Side-band AGP requests using SBA[7:0] If side-band addressing is supported and configured, the PIPE# signal is not used. (And the signal is re-used for another purpose in the AGP 3.0 protocol, which requires side-band addressing.) Instead, requests are broken into 16-bit pieces which are sent as two bytes across the SBA bus. There is no need for the card to ask permission from the motherboard; a new request may be sent at any time as long as the number of outstanding requests is within the configured maximum queue depth. The possible values are: 0aaa aaaa aaaa alll Queue a request with the given low-order address bits A[14:3] and length 8×(L[2:0]+1). The command and high-order bits are as previously specified. Any number of requests may be queued by sending only this pattern, as long as the command and higher address bits remain the same. 10cc ccra aaaa aaaa Use command C[3:0] and address bits A[23:15] for future requests. (Bit R is reserved.) This does not queue a request, but sets values that will be used in all future queued requests. 110r aaaa aaaa aaaa Use address bits A[35:24] for future requests. 1110 aaaa aaaa aaaa Use address bits A[47:36] for future requests. 1111 0xxx, 1111 10xx, 1111 110x Reserved, do not use. 1111 1110 Synchronization pattern used when starting the SBA bus after an idle period. 1111 1111 No operation; no request. At AGP 1× speed, this may be sent as a single byte and a following 16-bit side-band request started one cycle later. At AGP 2× and higher speeds, all side-band requests, including this NOP, are 16 bits long. Sideband address bytes are sent at the same rate as data transfers, up to 8× the 66 MHz basic bus clock. Sideband addressing has the advantage that it mostly eliminates the need for turnaround cycles on the AD bus between transfers, in the usual case when read operations greatly outnumber writes. AGP responses While asserting GNT#, the motherboard may instead indicate via the ST bits that a data phase for a queued request will be performed next. There are four queues: two priorities (low- and high-priority) for each of reads and writes, and each is processed in order. Obviously, the motherboard will attempt to complete high-priority requests first, but there is no limit on the number of low-priority responses which may be delivered while the high-priority request is processed. For each cycle when the GNT# is asserted and the status bits have the value 00p, a read response of the indicated priority is scheduled to be returned. At the next available opportunity (typically the next clock cycle), the motherboard will assert TRDY# (target ready) and begin transferring the response to the oldest request in the indicated read queue. (Other PCI bus signals like FRAME#, DEVSEL# and IRDY# remain deasserted.) Up to four clock cycles worth of data (16 bytes at AGP 1× or 128 bytes at AGP 8×) are transferred without waiting for acknowledgement from the card. If the response is longer than that, both the card and motherboard must indicate their ability to continue on the third cycle by asserting IRDY# (initiator ready) and TRDY#, respectively. If either one does not, wait states will be inserted until two cycles after they both do. (The value of IRDY# and TRDY# at other times is irrelevant and they are usually deasserted.) The C/BE# byte enable lines may be ignored during read responses, but are held asserted (all bytes valid) by the motherboard. The card may also assert the RBF# (read buffer full) signal to indicate that it is temporarily unable to receive more low-priority read responses. The motherboard will refrain from scheduling any more low-priority read responses. The card must still be able to receive the end of the current response, and the first four-cycle block of the following one if scheduled, plus any high-priority responses it has requested. For each cycle when GNT# is asserted and the status bits have the value 01p, write data is scheduled to be sent across the bus. At the next available opportunity (typically the next clock cycle), the card will assert IRDY# (initiator ready) and begin transferring the data portion of the oldest request in the indicated write queue. If the data is longer than four clock cycles, the motherboard will indicate its ability to continue by asserting TRDY# on the third cycle. Unlike reads, there is no provision for the card to delay the write; if it didn't have the data ready to send, it shouldn't have queued the request. The C/BE# lines are used with write data, and may be used by the card to select which bytes should be written to memory. The multiplier in AGP 2×, 4× and 8× indicates the number of data transfers across the bus during each 66 MHz clock cycle. Such transfers use source synchronous clocking with a "strobe" signal (AD_STB[0], AD_STB[1], and SB_STB) generated by the data source. AGP 4× adds complementary strobe signals. Because AGP transactions may be as short as two transfers, at AGP 4× and 8× speeds it is possible for a request to complete in the middle of a clock cycle. In such a case, the cycle is padded with dummy data transfers (with the C/BE# byte enable lines held deasserted). Connector pinout The AGP connector contains almost all PCI signals, plus several additions. The connector has 66 contacts on each side, although 4 are removed for each keying notch. Pin 1 is closest to the I/O bracket, and the B and A sides are as in the table, looking down at the motherboard connector. Contacts are spaced at 1 mm intervals, however they are arranged in two staggered vertical rows so that there is 2 mm space between pins in each row. Odd-numbered A-side contacts, and even-numbered B-side contacts are in the lower row (1.0 to 3.5 mm from the card edge). The others are in the upper row (3.7 to 6.0 mm from the card edge). PCI signals omitted are: The −12 V supply The third and fourth interrupt requests (INTC#, INTD#) The JTAG pins (TRST#, TCK, TMS, TDI, TDO) The SMBus pins (SMBCLK, SMBDAT) The IDSEL pin; an AGP card connects AD[16] to IDSEL internally The 64-bit extension (REQ64#, ACK64#) and 66 MHz (M66EN) pins The LOCK# pin for locked transaction support Signals added are: Data strobes AD_STB[1:0] (and AD_STB[1:0]# in AGP 2.0) The sideband address bus SBA[7:0] and SB_STB (and SB_STB# in AGP 2.0) The ST[2:0] status signals USB+ and USB− (and OVERCNT# in AGP 2.0) The PIPE# signal (removed in AGP 3.0 for 0.8 V signaling) The RBF# signal The TYPEDET#, Vregcg and Vreggc pins (AGP 2.0 for 1.5V signaling) The DBI_HI and DBI_LO signals (AGP 3.0 for 0.8 V signaling only) The GC_DET# and MB_DET# pins (AGP 3.0 for 0.8V signaling) The WBF# signal (AGP 3.0 fast write extension) See also List of device bandwidths Serial Digital Video Out for ADD DVI adapter cards AGP Inline Memory Module Notes References External links Archived AGP Implementors Forum AGP specifications: 1.0, 2.0, 3.0, Pro 1.0, Pro 1.1a AGP Compatibility For Sticklers AGP pinout AGP expansion slots AGP compatibility (with pictures) PCI Specifications Documents contains AGP specs. Universal Accelerated Graphics Port (UAGP) How Stuff Works - AGP A discussion from 2003 of what AGP aperture is, how it works, and how much memory should be allocated to it. Macintosh internals IBM PC compatibles Intel graphics Motherboard expansion slot Peripheral Component Interconnect
2382
https://en.wikipedia.org/wiki/Aalen
Aalen
Aalen () is a former Free Imperial City located in the eastern part of the German state of Baden-Württemberg, about east of Stuttgart and north of Ulm. It is the seat of the Ostalbkreis district and is its largest town. It is also the largest town in the Ostwürttemberg region. Since 1956, Aalen has had the status of Große Kreisstadt (major district town). It is noted for its many half-timbered houses constructed from the 16th century through the 18th century. With an area of 146.63 km2, Aalen is ranked 7th in Baden-Württemberg and 2nd within the Government Region of Stuttgart, after Stuttgart. With a population of about 66,000, Aalen is the 15th most-populated settlement in Baden-Württemberg. Geography Situation Aalen is situated on the upper reaches of the river Kocher, at the foot of the Swabian Jura which lies to the south and south-east, and close to the hilly landscapes of the Ellwangen Hills to the north and the Welland to the north-west. The west of Aalen's territory is on the foreland of the eastern Swabian Jura, and the north and north-west is on the Swabian-Franconian Forest, both being part of the Swabian Keuper-Lias Plains. The south-west is part of the Albuch, the east is part of the Härtsfeld, these two both being parts of the Swabian Jura. The Kocher enters the town's territory from Oberkochen to the south, crosses the district of Unterkochen, then enters the town centre, where the Aal flows into it. The Aal is a small river located only within the town's territory. Next, the Kocher crosses the district of Wasseralfingen, then leaves the town for Hüttlingen. Rivers originating near Aalen are the Rems (near Essingen, west of Aalen) and the Jagst (near Unterschneidheim, east of Aalen), both being tributaries of the Neckar, just like the Kocher. The elevation in the centre of the market square is relative to Normalhöhennull. The territory's lowest point is at the Lein river near Rodamsdörfle, the highest point is the Grünberg's peak near Unterkochen at . Geology Aalen's territory ranges over all lithostratigraphic groups of the South German Jurassic: Aalen's south and the Flexner massif are on top of the White Jurassic, the town centre is on the Brown Jurassic, and a part of Wasseralfingen is on the Black Jurassic. As a result, the town advertises itself as a "Geologist's Mecca". Most parts of the territory are on the Opalinuston-Formation (Opalinum Clay Formation) of the Aalenian subdivision of the Jurassic Period, which is named after Aalen. On the Sandberg, the Schnaitberg and the Schradenberg hills, all in the west of Aalen, the Eisensandstein (Iron Sandstone) formation emerges to the surface. On the other hills of the city, sands (Goldshöfer Sande), gravel and residual rubble prevail. The historic centre of Aalen and the other areas in the Kocher valley are founded completely on holocenic floodplain loam (Auelehm) and riverbed gravel that have filled in the valley. Most parts of Dewangen and Fachsenfeld are founded on formations of Jurensismergel (Jurensis Marl), Posidonienschiefer (cf. Posidonia Shale), Amaltheenton (Amalthean Clay), Numismalismergel (Numismalis Marl) and Obtususton (Obtusus Clay, named after Asteroceras obtusum ammonites) moving from south to north, all belonging to the Jurassic and being rich in fossils. They are at last followed by the Trossingen Formation already belonging to the Late Triassic. Until 1939 iron ore was mined on the Braunenberg hill. (see Tiefer Stollen section). Extent of the borough The maximum extent of the town's territory amounts to in a north–south dimension and in an east–west dimension. The area is , which includes 42.2% agriculturally used area and 37.7% of forest. 11.5% are built up or vacant, 6.4% is used by traffic infrastructure. Sporting and recreation grounds and parks comprise 1% , other areas 1.1% . Boroughs Aalen's territory consists of the town centre (Kernstadt) and the municipalities merged from between 1938 (Unterrombach) and 1975 (Wasseralfingen, see mergings section). The municipalities merged in the course of the latest municipal reform of the 1970s are also called Stadtbezirke (quarters or districts), and are Ortschaften ("settlements") in terms of Baden-Württemberg's Gemeindeordnung (municipal code), which means, each of them has its own council elected by its respective residents (Ortschaftsrat) and is presided by a spokesperson (Ortsvorsteher). The town centre itself and the merged former municipalities consist of numerous villages (Teilorte), mostly separated by open ground from each other and having their own independent and long-standing history. Some however have been created as planned communities, which were given proper names, but no well-defined borders. List of villages: Spatial planning Aalen forms a Mittelzentrum ("medium-level centre") within the Ostwürttemberg region. Its designated catchment area includes the following municipalities of the central and eastern Ostalbkreis district: Abtsgmünd, Bopfingen, Essingen, Hüttlingen, Kirchheim am Ries, Lauchheim, Neresheim, Oberkochen, Riesbürg and Westhausen, and is interwoven with the catchment area of Nördlingen, situated in Bavaria, east of Aalen. Climate As Aalen's territory sprawls on escarpments of the Swabian Jura, on the Albuch and the Härtsfeld landscapes, and its elevation has a range of , the climate varies from district to district. The weather station the following data originate from is located between the town centre and Wasseralfingen at about and has been in operation since 1991. The sunshine duration is about 1800 hours per year, which averages 4.93 hours per day. So Aalen is above the German average of 1550 hours per year. However, with 167 days of precipitation, Aalen's region also ranks above the German average of 138. The annual rainfall is , about the average within Baden-Württemberg. The annual mean temperature is . Here Aalen ranks above the German average of and the Baden-Württemberg average of . History Civic history First settlements Numerous remains of early civilization have been found in the area. Tools made of flint and traces of Mesolithic human settlement dated between the 8th and 5th millennium BC were found on several sites on the margins of the Kocher and Jagst valleys. On the Schloßbaufeld plateau (appr. ), situated behind Kocherburg castle near Unterkochen, a hill-top settlement was found, with the core being dated to the Bronze Age. In the Appenwang forest near Wasseralfingen, in Goldshöfe, and in Ebnat, tumuli of the Hallstatt culture were found. In Aalen and Wasseralfingen, gold and silver coins left by the Celts were found. The Celts were responsible for the fortifications in the Schloßbaufeld settlement consisting of sectional embankments and a stone wall. Also, Near Heisenberg (Wasseralfingen), a Celtic nemeton has been identified; however, it is no longer readily apparent. Roman era After abandoning the Alb Limes (a limes generally following the ridgeline of the Swabian Jura) around 150 AD, Aalen's territory became part of the Roman Empire, in direct vicinity of the then newly erected Rhaetian Limes. The Romans erected a castrum to house the cavalry unit Ala II Flavia milliaria; its remains are known today as Kastell Aalen ("Aalen Roman fort"). The site is west of today's town centre at the bottom of the Schillerhöhe hill. With about 1,000 horsemen and nearly as many grooms, it was the largest fort of auxiliaries along the Rhaetian Limes. There were Civilian settlements adjacent along the south and the east. Around 260 AD, the Romans gave up the fort as they withdrew their presence in unoccupied Germania back to the Rhine and Danube rivers, and the Alamanni took over the region. Based on 3rd- and 4th-century coins found, the civilian settlement continued to exist for the time being. However, there is no evidence of continued civilization between the Roman era and the Middle Ages. Foundation Based on discovery of alamannic graves, archaeologists have established the 7th century as the origination of Aalen. In the northern and western walls of St. John's church, which is located directly adjacent to the eastern gate of the Roman fort, Roman stones were incorporated. The building that exists today probably dates to the 9th century. The first mention of Aalen was in 839, when emperor Louis the Pious reportedly permitted the Fulda monastery to exchange land with the Hammerstadt village, then known as Hamarstat. Aalen itself was first mentioned in an inventory list of Ellwangen Abbey, dated ca. 1136, as the village Alon, along with a lower nobleman named Conrad of Aalen. This nobleman probably had his ancestral castle at a site south of today's town centre and was subject first to Ellwangen abbey, later to the House of Hohenstaufen, and eventually to the House of Oettingen. 1426 was the last time a member of that house was mentioned in connection with Aalen. Documents, from the Middle Ages, indicate that the town of Aalen was founded by the Hohenstaufen some time between 1241 and 1246, but at a different location than the earlier village, which was supposedly destroyed in 1388 during the war between the Alliance of Swabian Cities and the Dukes of Bavaria. Later, it is documented that the counts of Oettingen ruled the town in 1340. They are reported to have pawned the town to Count Eberhard II and subsequently to the House of Württemberg in 1358 or 1359 in exchange for an amount of money. Imperial City Designation as Imperial City During the war against Württemberg, Emperor Charles IV took the town without a fight after a siege. On 3 December 1360, he declared Aalen an Imperial City, that is, a city or town responsible only to the emperor, a status that made it a quasi-sovereign city-state and that it kept until 1803. In 1377, Aalen joined the Alliance of Swabian Cities, and in 1385, the term civitas appears in the town's seal for the first time. In 1398, Aalen was granted the right to hold markets, and in 1401 Aalen obtained proper jurisdiction. The oldest artistic representation of Aalen was made in 1528. It was made as the basis of a lawsuit between the town and the Counts of Oettingen at the Reichskammergericht in Speyer. It shows Aalen surrounded by walls, towers, and double moats. The layout of the moats, which had an embankment built between them, is recognizable by the present streets named Nördlicher, Östlicher, Südlicher and Westlicher Stadtgraben (Northern, Eastern, Southern and Western Moat respectively). The wall was about tall, 1518 single paces () long and enclosed an area of . During its early years, the town had two town gates: The Upper or Ellwangen Gate in the east, and St. Martin's gate in the south; however due to frequent floods, St. Martin's gate was bricked up in the 14th century and replaced by the Lower or Gmünd Gate built in the west before 1400. Later, several minor side gates were added. The central street market took place on the Wettegasse (today called Marktplatz, "market square") and the Reichsstädter Straße. So the market district stretched from one gate to the other, however in Aalen it was not straight, but with a 90-degree curve between southern (St. Martin's) gate and eastern (Ellwangen) gate. Around 1500, the civic graveyard was relocated from the town church to St. John's Church, and in 1514, the Vierundzwanziger ("Group of 24") was the first assembly constituted by the citizens. Reformation Delegated by Württemberg's Duke Louis III, on 28 June 1575, nearly 30 years after Martin Luther's death, Jakob Andreae, professor and chancellor of the University of Tübingen, arrived in Aalen. The sermon he gave the following day convinced the mayor, the council, and the citizens to adopt the Reformation in the town. Andreae stayed in Aalen for four weeks to help with the change. This brought along enormous changes, as the council forbade the Roman Catholic priests to celebrate masses and give sermons. However, after victories of the imperial armies at the beginning of the Thirty Years' War, the Prince-Provostry of Ellwangen, which still held the right of patronage in Aalen, were able to temporarily bring Catholicism back to Aalen; however after the military successes of the Protestant Union, Protestant church practices were instituted again. Fire of 1634 On the night of 5 September 1634, two ensigns of the army of Bernard of Saxe-Weimar who were fighting with the Swedes and retreating after the Battle of Nördlingen set fire to two powder carriages, to prevent the war material to fall into Croatian hands and to prevent their advance. The result was a conflagration, that some say destroyed portions of the town. There are differing stories regarding this fire. According to 17th-century accounts, the church and all the buildings, except of the Schwörturm tower, were casualties of the fire, and only nine families survived. 19th century research by Hermann Bauer, Lutheran pastor and local historian, discovered that the 17th-century account is exaggerated, but he does agree that the town church and buildings in a "rather large" semicircle around it were destroyed. The fire also destroyed the town archive housed in an addition to the church, with all of its documents. After the fire, soldiers of both armies went through the town looting. It took nearly 100 years for the town to reach its population of 2,000. French troops marched through Aalen in 1688 during the Nine Years' War; however, unlike other places, they left without leaving severe damages. The French came through again in 1702 during the War of the Spanish Succession and in 1741 during the War of the Austrian Succession, the latter also caused imperial troops to move through in 1743. The town church's tower collapsed in 1765, presumably because proper building techniques were not utilized during the reconstruction after the fire of 1634. The collapsing tower struck two children of the tower watchman who died of their injuries, and destroyed the nave, leaving only the altar cross intact. The remaining walls had to be knocked down due to the damage. Reconstruction began the same year, creating the building that exists today. On 22 November 1749, the so-called Aalen protocol regulating the cohabitation of Lutherans and Roman Catholics in the jointly ruled territory of Oberkochen was signed in Aalen by the Duchy of Württemberg and the Prince-Provostry of Ellwangen. Aalen had been chosen because of its neutral status as a Free Imperial City. Napoleonic era and end of the Imperial City of Aalen During the War of the First Coalition (1796), Aalen was looted. The War of the Second Coalition concluded in 1801 with the signing of the Treaty of Lunéville, which led to the German Mediatisation of 1803 that assigned most Imperial Cities to the neighbouring principalities. Aalen was assigned to the Electorate of Württemberg, which later became the Kingdom of Württemberg, and became seat of the District ("Oberamt") of Aalen. During the War of the Third Coalition, on 6 October 1805, Napoleon Bonaparte arrived in Aalen, with an army of 40,000. This event, along with Bavarian and Austrian troops moving in some days later, caused miseries that according to the town clerk "no feather could describe". In 1811, the municipality of Unterrombach was formed out of some villages previously belonging to Aalen, some to the Barons of Wöllwarth, and the eastern villages were assigned to the municipality of Unterkochen. In the age of the Napoleonic wars, the town walls were no longer of use, and in the 18th century, with the maintenance of walls, gates and towers becoming more neglected Finally, due to the fact that the funds were lacking, starting in 1800, most towers were demolished, the other buildings followed soon. Industrial revolution Before the industrial revolution, Aalen's economy was shaped by its rural setting. Many citizens were pursuing farming besides their craft, such as tanning. In the mid 19th century, there were twelve tanneries in Aalen, due to the proximity of Ulm, an important sales market. Other crafts that added to the economy were weaving mills, which produced linen and woolen goods, and baking of sweet pastry and gingerbread. In Aalen, industrialisation was a slow process. The first major increase was in the 1840s, when three factories for nails and some other factories emerged. It was the link with the railway network, by the opening of the Rems Railway from Cannstatt to Wasseralfingen in 1861, that brought more industry to Aalen, along with the royal steel mill (later Schwäbische Hüttenwerke) in Wasseralfingen. The Rems Railway's extension to Nördlingen in 1863, the opening of the Brenz Railway in 1864 and of the Upper Jagst Railway in 1866 turned Aalen into a railway hub. Furthermore, between 1901 and its shutdown in 1972, the Härtsfeld Railway connected Aalen with Dillingen an der Donau via Neresheim. Part of becoming a rail hub entailed more jobs based on the rail industry. These included, a maintenance facility, a roundhouse, an administrative office, two track maintenance shops, and a freight station with an industrial branch line. This helped shape Aalen into what today's historians call a "railwayman's town". Starting in 1866, the utilities in town all began to be upgraded. Starting with the Aalen gasworks which were opened and gas lighting was introduced. Then in 1870, a modern water supply system was started and in 1912 the mains electricity. Finally, in 1935, the first electrically powered street lights were installed. To fight housing shortage during and immediately after World War I, the town set up barracks settlement areas at the Schlauch and Alter Turnplatz grounds. In spite of the industry being crippled by the Great Depression of 1929, the public baths at the Hirschbach creek where modernized, extended and re-opened in 1931. Nazi era In the federal election of 1932, the Nazi Party performed below average in Aalen with 25.8% of votes compared to 33.1% on the national level, thus finishing second to the Centre Party which had 26.6% (11.9% nationwide) of the votes, and ahead of the Social Democratic Party of Germany with 19.8% (20.4%). However, the March 1933 federal elections showed that the sentiment had changed as the Nazi Party received 34.1% (still below German average 43.9% nationwide), but by far the leading vote-getter in Aalen, followed by the Centre party at 26.6% (11.3% nationwide) and the Social Democrats 18.6% (18.3% nationwide). The democratically elected mayor Friedrich Schwarz remained in office until the Nazis removed him from office, in 1934, and replaced him by chairman of the Nazi Party town council head and brewery owner Karl Barth. Karl Barth was a provisional mayor until the more permanent solution of Karl Schübel. In August 1934, the Nazi consumer fair Braune Messe ("brown fair") was held in Aalen. During Nazi rule in Germany, there were many military offices constructed in Aalen, starting with, in 1936, a military district riding and driving school for Wehrkreis V. The Nazis also built an army replenishment office (Heeresverpflegungsamt), a branch arsenal office (Heeresnebenzeugamt) and a branch army ammunitions institute (Heeresnebenmunitionsanstalt). Starting in 1935, mergers of neighbouring towns began. In 1938, the Oberamt was transformed into the Landkreis of Aalen and the municipality of Unterrombach was disbanded. Its territory was mostly added to Aalen, with the exception of Hammerstadt, which was added to the municipality of Dewangen. Forst, Rauental and Vogelsang were added to Essingen (in 1952 the entire former municipality of Unterrombach was merged into Aalen, with the exception of Forst, which is part of Essingen until present). In September 1944, the Wiesendorf concentration camp, a subcamp of Natzweiler-Struthof, was constructed nearby. It was designated for between 200 and 300 prisoners who were utilized for forced labor in industrial businesses nearby. Until the camp's dissolution in February 1945, 60 prisoners died. Between 1946 and 1957, the camp buildings were torn down; however, its foundations are still in place in house Moltkestraße 44/46. Also, there were several other labour camps which existed where prisoners of war along with women and men from occupied countries occupied by Germany were pooled. The prisoners at these other camps had to work for the arms industry in major businesses like Schwäbische Hüttenwerke and the Alfing Keßler machine factory. In the civic hospital, the deaconesses on duty were gradually replaced by National Socialist People's Welfare nurses. Nazi eugenics led to compulsory sterilization of some 200 persons there. Fortunately, Aalen avoided most of the combat activity during World War II. It was only during the last weeks of the war that Aalen became a target of air warfare, which led to the destruction and severe damage of parts of the town, the train station, and other railway installations. A series of air attacks lasting for more than three weeks reached its peak on 17 April 1945, when United States Army Air Forces planes bombed the branch arsenal office and the train station. During this raid, 59 people were killed, more than half of them buried by debris, and more than 500 lost their homes. Also, 33 residential buildings, 12 other buildings and 2 bridges were destroyed, and 163 buildings, including 2 churches, were damaged. Five days later, the Nazi rulers of Aalen were unseated by the US forces. Post-war era Aalen became part of the State of Baden-Württemberg, upon its creation in 1952. Then, with the Baden-Württemberg territorial reform of 1973, the District of Aalen was merged into the Ostalbkreis district. Subsequently, Aalen became seat of that district, and in 1975, the town's borough attained its present size (see below). The population of Aalen exceeded the limit of 20,000, which was the requirement for to gain the status of Große Kreisstadt ("major district town") in 1946. On 1 August 1947, Aalen was declared Unmittelbare Kreisstadt ("immediate district town"), and with the creation of the Gemeindeordnung (municipal code) of Baden-Württemberg on 1 April 1956, it was declared Große Kreisstadt. Religions On 31 December 2008, 51.1 percent of Aalen were members of the Catholic Church, 23.9 percent were members of the Evangelical-Lutheran Church. About 25 percent belong to other or no religious community or gave no information. The district of Waldhausen was the district with the highest percentage of Roman Catholic inhabitants at 75.6 percent, and the central district was the one with the highest percentage of Evangelical-Lutheran inhabitants at 25.6 percent, as well as those claiming no religious preference at 32.5 percent. Protestantism Aalen's population originally was subject to the jus patronatus of Ellwangen Abbey, and thus subject to the Roman Catholic Diocese of Augsburg. With the assistance of the Duke of Württemberg, in 1575, the reformation was implemented in Aalen. Subsequently, Aalen has been a predominantly Protestant town for centuries, with the exception of the years from 1628 until 1632 (see reformation section). Being an Imperial City, Aalen could govern its clerical matters on its own, so Clerics, organists and choir masters were direct subjects to the council, which thus exerted bishop-like power. There was even a proper hymn book for Aalen. After the transition to Württemberg, in 1803, Aalen became seat of a deanery, with the dean church being the Town Church (with the building constructed from 1765 to 1767 and existing until present). Another popular church is St. John's Church, located on the cemetery and refurbished in 1561. As Aalen's population grew in the 20th century, more parishes were founded: St. Mark's parish with its church building of 1967 and St. Martin's parish with its church of 1974. In the borough of Unterrombach, Aalen had implemented the reformation as well, but the community remained a chapel-of-ease of Aalen. A proper church, the Christ Church, was erected in 1912 and a proper parish was established in 1947. In Fachsenfeld, the ruling family of Woellwarth resp. of Leinroden implemented the reformation. A parish church was built in 1591, however with an influx of Catholics in the 18th century, a Catholic majority was established. The other districts of present-day Aalen remained mostly catholic after the reformation, however Wasseralfingen established a Lutheran parish in 1891 and a church, St. Magdalene's Church, in 1893. In Unterkochen, after World War II, a parish was established and a church was built in 1960. All four parishes belong to the deanery of Aalen within the Evangelical-Lutheran Church in Württemberg. Furthermore, in Aalen there are Old Pietistic communities. Catholicism The few Catholics of today's central district were covered by the parish of Unterkochen until the 19th century, a situation which continued for some years even after completion of St. Mary's Church in 1868, which was constructed by Georg Morlok. However, in 1872 Aalen got its proper parish again, and in 1913, a second Catholic church, Salvator's Church, was completed, and in 1969 the Holy Cross Church was also finished. In 1963, a second parish was set up, and in 1972 it got a new Church, the new St. Mary's Church, which has been erected in place of the old St. Mary's church, which had been torn down in 1968. Another church of the second parish was St. Augustine's Church, which was completed in 1970. Finally, in 1976 and 1988, St. Elizabeth's Church and St. Thomas' Church were completed. Furthermore, in 1963, the St. Michael pastoral care office was built. Hofherrnweiler has its own Catholic church, St. Boniface's, since 1904. The villages of Dewangen, Ebnat, Hofen, Waldhausen and Wasseralfingen had remained Catholic after reformation, so old parishes and churches persist there. The Assumption of Mary Church in Dewangen has an early Gothic tower and a newly built nave (1875). Mary's Immaculate Conception Church in Ebnat was constructed in 1723; however the church was first mentioned in 1298. Hofen's Saint George's Church is a fortified church, whose current nave was built between 1762 and 1775. Alongside the church, the Late Gothic St. Odile's Chapel is standing, whose entrance has the year 1462 engraved upon it. Foundations of prior buildings have been dated to the 11th and 13th century. St. Mary's Church of Unterkochen was first mentioned in 1248, and has served the Catholics of Aalen for a long time. Waldhausen's parish church of St. Nicholas was built between 1699 and 1716. Wasseralfingen at first was a chapel of ease for Hofen, but has since had its own chapel, St. Stephen, built. It was presumably built in 1353 and remodeled in 1832. In 1834, a proper parish was established, which built a new St. Stephen's Church. This new building utilized the Romanesque Revival architecture style and was built between 1881 and 1883, and has since remained the parish's landmark. Also, Fachsenfeld received its own church, named Sacred Heart in 1895. All Catholic parishes within Aalen are today incorporated into four pastoral care units within the Ostalb Deanery of the Diocese of Rottenburg-Stuttgart; however these units also comprise some parishes outside of Aalen. Pastoral Care Unit two comprises the parishes of Essingen, Dewangen and Fachsenfeld, unit four comprises Hofen and Wasseralfingen, unit five comprises both parishes of Aalen's centre and Hofherrnweiler, unit five comprises Waldhausen, Ebnat, Oberkochen and Unterkochen. Other Christian communities In addition to the two major religions within Aalen, there are also free churches and other communities, including the United Methodist Church, the Baptists, the Seventh-day Adventist Church and the New Apostolic Church. Other religions Until the late 19th century, no Jews were documented within Aalen. In 1886 there were four Jews were living in Aalen, a number that rose to ten in 1900, fell to seven in 1905, and remained so until 1925. Upon the Nazis' rise to power in 1933, seven Jews, including two children, lived in Aalen. During the Kristallnacht in 1938, the vitrines of the three Jewish shops in the town were smashed and their proprietors imprisoned for several weeks. After their release, most Aalen Jews emigrated. The last Jews of Aalen, Fanny Kahn, was forcibly resettled to Oberdorf am Ipf, which had a large Jewish community. Today, a street of Aalen is named after her. The Jew Max Pfeffer returned from Brussels to Aalen in 1948 to continue his shop, but emigrated to Italy in 1967. In Aalen, there is an Islamic Ditib community, which maintains the D.I.T.I.B. Mosque of Aalen (Central Mosque) located at Ulmer Straße. The mosque's construction started on 30 August 2008. The Islamist Millî Görüş organisation maintains the Fatih Mosque, as well at Ulmer Straße. Mergings The present-day make up of Aalen was created on 21 June 1975 by the unification of the cities of Aalen and Wasseralfingen, with the initial name of Aalen-Wasseralfingen. This annexation made Aalen's territory one third larger than its prior size. On 1 July 1975, the name Aalen was revived. Prior to this merger, the town of Aalen had already annexed the following municipalities: 1938: Unterrombach 1 January 1970: Waldhausen 1 July 1972: Ebnat 1 January 1973: Dewangen, Fachsenfeld (including the village of Hangendenbach, which was transferred from Abtsgmünd in 1954) and Unterkochen. The merging of Dewangen nearly doubled the territory of Aalen. Population's progression and structure During the Middle Ages and the early modern period, Aalen was just a small town with a few hundred inhabitants. The population grew slowly due to numerous wars, famines and epidemics. It was the beginning of the Industrial Revolution in the 19th century where Aalen's growth accelerated. Whereas in 1803, only 1,932 people inhabited the town, in 1905 it had already increased to 10,442. The number continued to rise and reached 15,890 in 1939. The influx of refugees and ethnic Germans from Germany's former eastern territories after World War II pushed the population to 31,814 in 1961. The merger with Wasseralfingen on 21 June 1975 added 14,597 persons and resulted in a total population of 65,165 people. On 30 June 2005, the population, which was officially determined by the Statistical Office of Baden-Württemberg, was 67,125. The following overview shows how the population figures of the borough were ascertained. Until 1823, the figures are mostly estimates, thereafter census results or official updates by the state statistical office. Starting in 1871, the figures were determined by non-uniform method of tabulation using extrapolation. ¹ Census result On 31 December 2008, Aalen had precisely 66,058 inhabitants, of which 33,579 were female and 32,479 were male. The average age of Aalen's inhabitants rose from 40.5 years in 2000 to 42.4 in 2008. Within the borough, 6,312 foreigners resided, which is 9.56 percent. Of them, the largest percentage are from Turkey (38 percent of all foreigners), the second largest group are from Italy (13 percent), followed by Croatians (6 percent) and Serbs (5 percent). The number of married residents fell from 32,948 in 1996 to 31,357 in 2007, while the number of divorced residents rose in the same period from 2,625 to 3,859. The number of single residents slightly increased between 1996 and 2004 from 25,902 to 26,268 and fell slightly until 2007 to 26,147. The number of widowed residents fell from 5,036 in 1996 to 4,783 in 2007. Politics Aalen has arranged a municipal association with Essingen and Hüttlingen. Council Since the local election of 25 May 2014, the town council consists of 51 representatives having a term of five years. The seats are distributed as follows on parties and groups (changes refer to the second last election of 2004): Mayors Since 1374, the mayor and the council maintain the government of the town. In the 16th century, the town had two, sometimes three mayors, and in 1552, the council had 13 members. Later, the head of the administration was reorganized several times. In the Württemberg era, the mayor's title was initially called Bürgermeister, then from 1819 it was Schultheiß, and since 1947 it is Oberbürgermeister. The mayor is elected for a term of eight years, and he is chairman and a voting member of the council. He has one deputy with the official title of Erster Bürgermeister ("first mayor") and one with the official title of Bürgermeister ("mayor"). Heads of town in Aalen since 1802 1802–: Theodor Betzler 1812–1819: Ludwig Hölder 1819–1829: Theodor Betzler 1829: Palm 1829–1848: Philipp Ehmann 1848–1873: Gustav Oesterlein 1873–1900: Julius Bausch 1900–1902: Paul Maier 1903–1934: Friedrich Schwarz 1935–1945: Karl Schübel (NSDAP) 1945–1950: Otto Balluff 1950–1975: Karl Schübel (independent) 1976–2005: Ulrich Pfeifle (SPD) 2005–2013: Martin Gerlach (independent) 2013–2021: Thilo Rentschler (SPD) 2021–: Frederick Brütting (SPD) Coat of arms and flag Aalen's coat of arms depicts a black eagle with a red tongue on golden background, having a red shield on its breast with a bent silver eel on it. Eagle and eel were first acknowledged as Aalen's heraldic animals in the seal of 1385, with the eagle representing the town's imperial immediacy. After the territorial reform, it was bestowed again by the Administrative District of Stuttgart on 16 November 1976. The coat of arms' blazon reads: "In gold, the black imperial eagle, with a red breast shield applied to it, therein a bent silver eel" (In Gold der schwarze Reichsadler, belegt mit einem roten Brustschild, darin ein gekrümmter silberner Aal). Aalen's flag is striped in red and white and contains the coat of arms. The origin of the town's name is uncertain. Matthäus Merian (1593–1650) presumed the name to originate from its location at the Kocher river, where "frequently eels are caught", while Aal is German for "eel". Other explanations point to Aalen as the garrison of an ala during the Roman empire, respectively to an abridgement of the Roman name "Aquileia" as a potential name of the Roman fort, a name that nearby Heidenheim an der Brenz bore as well. Another interpretation points to a Celtic word aa meaning "water". Godparenthood On the occasion of the 1980 Reichsstädter Tage, Aalen took over godparenthood for the more than 3000 ethnic Germans displaced from the Wischau linguistic enclave. 972 of them settled in Aalen in 1946. The "Wischau Linguistic Enclave Society" (Gemeinschaft Wischauer Sprachinsel) regularly organises commemorative meetings in Aalen. Their traditional costumes are stored in the Old Town Hall. Municipal finances According to the 2007 municipal poll by the Baden-Württemberg chapter of the German Taxpayers Federation, municipal tax revenues totalling to 54,755 million Euros (2006) resp. 62,148 million Euros (2007) face the following debts: 2006 total: 109.9 million Euros debts (64.639 million of the finance department and 48.508 million of the municipal enterprises and fund assets) 2007 total: 114.5 million Euros debts (69.448 million of the finance department and 45.052 million of the municipal enterprises and fund assets) Twin towns – sister cities Aalen is twinned with: Saint-Lô, France (1978) Christchurch, United Kingdom (1981) Tatabánya, Hungary (1987) Antakya, Turkey (1995); initiated by Ismail Demirtas, who emigrated in 1962 from Turkey to Aalen and was social adviser for foreign employees Cervia, Italy (2011) Vilankulo, Mozambique (2018) The "Twin Towns Society of Aalen" (Städtepartnerschaftsverein Aalen e. V.) promotes friendly relations between Aalen and its twin towns, which comprises mutual exchanges of sports and cultural clubs, schools and other civic institutions. On the occasion of the Reichsstädter Tage, from 11 until 13 September 2009 the first conference of twin towns was held. Culture and sights Theatre The Theater der Stadt Aalen theatre was founded in 1991 and stages 400 to 500 performances a year. Schubart Literary Award The town endowed the "Schubart Literary Award" (Schubart-Literaturpreis) in 1955 in tribute to Christian Friedrich Daniel Schubart, who spent his childhood and youth in Aalen. It is one of the earliest literary awards in Baden-Württemberg and is awarded biennially to German-language writers whose work coincide with Schubart's "liberal and enlightened reasoning". It is compensated with 12,000 Euros. Music Founded in 1958, the "Music School of the Town of Aalen" today has about 1,500 students taught by 27 music instructors in 30 subjects. In 1977, a symphony orchestra was founded in Aalen, which today is called Aalener Sinfonieorchester, and consists mostly of instructors and students of the music school. It performs three public concerts annually: The "New Year's Concert" in January, the "Symphony Concert" in July and a "Christmas Concert" in December. Beyond that, music festivals regularly take place in Aalen, like the Aalen Jazzfest. The Aalen volunteer fire department has had a marching band since 1952, whose roots date back to 1883. In 1959, the band received its first glockenspiel from TV host Peter Frankenfeld on the occasion of a TV appearance. A famous German rapper, designer and singer, that goes under the name of Cro, was born in Aalen and lived his early years here. Arts The Kunstverein Aalen was founded in 1983 as a non-profit art association and today is located in the Old Town Hall. The institution with more than 400 members focuses on solo and group exhibitions by international artists. It belongs to the Arbeitsgemeinschaft Deutscher Kunstvereine (ADKV), an umbrella organization for non-profit art associations. Museums and memorial sites Museums In the central district of Aalen, there are two museums: The "Aalen Limes Museum" (Limesmuseum Aalen) is located at the place of the largest Roman cavalry fort north of the Alps until about 200 AD. It opened in 1964. The museum exhibits numerous objects from the Roman era. The ruins of the cavalry fort located beside the museum is open to museum visitors. Every other year, a Roman festival is held in the area of the museum (see below). In the Geological-Paleontological Museum located in the historic town hall, there are more than 1500 fossils from the Swabian Jura, including ammonites, ichthyosaurs and corals, displayed. In the Waldhausen district the Heimatstüble museum of local history has an exhibition on agriculture and rural living. In the Wasseralfingen district, there are two more museums: The Museum Wasseralfingen comprises a local history exhibition and an art gallery including works of Hermann Plock, Helmut Schuster and Sieger Köder. Also, the stove plate collection of the Schwäbische Hüttenwerke steel mill is exhibited, with artists, modellers and the production sequence of a cast plate from design to final product being presented. Memorial sites There is memorial stone at the Schillerlinde tree above Wasseralfingen's ore pit dedicated to four prisoners of the subcamp of Natzweiler-Struthof concentration camp killed there. Also in Wasseralfingen, in the cemetery a memorial with the Polish inscription "To the victims of Hitler" which commemorates the deceased forced labourers buried there. In 1954, on the Schillerhöhe hill the town erected a bell tower as a memorial to Aalen's victims of both world wars and to the displacement of ethnic Germans. The tower was planned by Emil Leo, the bell was endowed by Carl Schneider. The tower is open on request. Every evening at 18:45 (before 2003: at 19:45), the memorial's bell rings. Buildings Churches The town centre is dominated by the Evangelical-Lutheran St. Nicholas' Church in the heart of the pedestrian area. The church, in its present shape being built between 1765 and 1767, is the only major Late Baroque building in Aalen and is the main church of the Evangelical-Lutheran parish of Aalen. St. John's Church is located inside of St. John's cemetery in the western centre. The building presumably is from the 9th century and thus is one of Württemberg's oldest existing churches. The interior features frescos from the early 13th century. For other churches in Aalen, see the Religions section. Historic Town Hall with "Spy" The Historic Town Hall was originally built in the 14th century. After the fire of 1634, it was re-constructed in 1636. This building received a clock from Lauterburg, and the Imperial City of Nuremberg donated a Carillon. It features a figurine of the "Spy of Aalen" and historically displayed other figurines, however the latter ones were lost by a fire in 1884. Since then, the Spy resides inside the reconstructed tower and has become a symbol of the town. The building was used as the town hall until 1907. Since 1977, the Geological-Paleontological Museum resides in the Historic Town Hall. According to legend, the citizens of Aalen owe the "Spy of Aalen" (Spion von Aalen) their town having been spared from destruction by the emperor's army: The Imperial City of Aalen was once in quarrel with the emperor, and his army was shortly before the gates to take the town. The people of Aalen got scared and thus dispatched their "most cunning" one out into the enemy's camp to spy out the strength of their troops. Without any digression, he went straight into the middle of the enemy camp, which inescapably led to him being seized and presented to the emperor. When the emperor asked him what he had lost here, he answered in Swabian German: "Don't frighten, high lords, I just want to peek how many cannons and other war things you've got, since I am the spy of Aalen". The emperor laughed upon such a blatancy and acted naïvety, steered him all through the camp and then sent him back home. Soon the emperor withdrew with his army as he thought a town such wise guys reside in deserved being spared. Old Town Hall The earliest record of the Old Town Hall was in 1575. Its outside wall features the oldest known coat of arms, which is of 1664. Until 1851, the building also housed the Krone-Post hotel, which coincided with being a station of the Thurn und Taxis postal company. It has housed many notable persons. Thus the so-called "Napoleon Window" with its "N" painted on reminds of the stay of French emperor Napoleon Bonaparte in 1805. According to legend, he rammed his head so hard it bled on this window, when he was startled by the noise of his soldiers ridiculing the "Spy of Aalen". The building was used as Aalen's town hall from 1907 until 1975. Today it houses a cabaret café and the stage of the Theatre of the Town of Aalen. The town has adopted the Wischau Linguistic Enclave Society due to their godparenthood and stores their traditional costumes in the building. Bürgerspital The Bürgerspital ("Civic Asylum") is a timber-frame house erected on Spritzenhausplatz ("Fire Engine House Square") in 1702. Until 1873, it was used as civic hospital, then, later as a retirement home. After a comprehensive renovation in 1980 it was turned into a senior citizen's community centre. Limes-Thermen On a slope of the Langert mountain, south of the town, the Limes-Thermen ("Limes Thermae") hot springs are located. They were built in ancient Roman style and opened in 1985. The health spa is supplied with water about . Market square The market square is the historic hub of Aalen and runs along about from the town hall in the south to the Historic Town Hall and the Old Town Hall in the north, where it empties into Radgasse alley. Since 1809, it is site of the weekly market on Wednesday and Saturday. About in front of the Reichsstädter Brunnen fountain at the town hall, the coats of arms of Aalen, its twinned cities and of the Wischau linguistic enclave are paved into the street as mosaic. Market fountain In 1705, for the water supply of Aalen a well casing was erected at the northern point of the market square, in front of the Historic Town Hall. It was a present of duke Eberhard Louis. The fountain bore a statue of emperor Joseph I., who was enthroned in 1705 and in 1707 renewed Aalen's Imperial City privileges. The fountain was supplied via a wooden pipe. Excessive water was dissipated through ditches branched from Kocher river. When in the early 1870s Aalen's water network was constructed, the fountain was replaced by a smaller fountain about distant. In 1975, the old market fountain was re-erected in baroque style. It bears a replica of the emperor's statue, with the original statue exhibited in the new town hall's lobby. The cast iron casing plates depict the 1718 coat of arms of the Duchy of Württemberg and the coats of arms of Aalen and of the merged municipalities. Reichsstädter Brunnen The Reichsstädter Brunnen fountain ("Imperial Civic Fountain") is located in front of the town hall at the southern point of the market square. It was created by sculptor Fritz Nuss in 1977 to commemorate Aalen's time as an Imperial City (1360–1803). On its circumference is a frieze showing bronze figurines illustrating the town's history. Radgasse The Radgasse ("Wheel Alley") features Aalen's oldest façade. Originally a small pond was on its side. The buildings were erected between 1659 and 1662 for peasants with citizenry privileges and renovated in the mid-1980s. The namesake for the alley was the "Wheel" tavern, which was to be found at the site of today's address Radgasse 15. Tiefer Stollen The former iron ore pit Wilhelm at Braunenberg hill was converted into the Tiefer Stollen tourist mine in order to remind of the old-day miners' efforts and to maintain it as a memorial of early industrialisation in the Aalen area. It has a mining museum open for visitors, and a mine railway takes visitors deep into the mountain. The Town of Aalen, a sponsorship association, and many citizens volunteered several thousand hours of labour to put the mine into its current state. As far as possible, things were left in the original state. In 1989, a sanitary gallery was established where respiratory diseases are treated within rest cures. Thus the Aalen village of Röthard, where the gallery is located, was awarded the title of "Place with sanitary gallery service" in 2004. Observatory The Aalen Observatory was built in 1969 as school observatory for the Schubart Gymnasium. In 2001, it was converted to a public observatory. Since then, it has been managed by the Astronomische Arbeitsgemeinschaft Aalen ("Aalen Astronomical Society"). It is located on Schillerhöhe hill and features two refractive telescopes. They were manufactured by Carl Zeiss AG which has its headquarters in nearby Oberkochen and operates a manufacturing works in Aalen (see below). In the observatory, guided tours and lectures are held regularly. Windpark Waldhausen The Windpark Waldhausen wind farm began operations in early 2007. It consists of seven REpower MM92 wind turbines with a nameplate capacity of 2 MW each. The hub height of each wind turbine is , with a rotor diameter of . Aalbäumle observation tower The tall Aalbäumle observation tower is built atop Langert mountain. This popular hiking destination was built in 1898 and was remodelled in 1992. It features a good view over Aalen and the Welland region, up to the Rosenstein mountain and Ellwangen. Beneath the tower, an adventure playground and a cabin is located. A flag on the tower signals whether the cabin's restaurant is open. Natural monuments The Baden-Württemberg State Institute for Environment, Measurements and Natural Conservation has laid out six protected landscapes in Aalen (the Swabian Jura escarpment between Lautern and Aalen with adjacent territories, the Swabian Jura escarpment between Unterkochen and Baiershofen, the Hilllands around Hofen, the Kugeltal and Ebnater Tal valleys with parts of Heiligental valley and adjacent territories, Laubachtal valley and Lower Lein Valley with side valleys), two sanctuary forests (Glashütte and Kocher Origin), 65 extensive natural monuments, 30 individual natural monuments and the following two protected areas: The large Dellenhäule protected area between Aalen's Waldhausen district and Neresheim's Elchingen district, created in 1969, is a sheep pasture with juniper and wood pasture of old willow oaks. The large Goldshöfer Sande protected area was established in 2000 and is situated between Aalen's Hofen district and Hüttlingen. The sands on the hill originated from the Early Pleistocene are of geological importance, and the various grove structures offer habitat to severely endangered bird species. Sports The football team, VfR Aalen, was founded in 1921 and played in the 2nd German League between 2012 and 2015, after which they were relegated to 3. Liga. Its playing venue is the Scholz-Arena situated in the west of the town, which bore the name Städtisches Waldstadion Aalen ("Civic Forest Stadium of Aalen") until 2008. From 1939 until 1945, the VfR played in the Gauliga Württemberg, then one of several parallel top-ranking soccer leagues of Germany. The KSV Aalen wrestles in the Wrestling Federal League. It was German champion in team wrestling in 2010. Its predecessor, the KSV Germania Aalen disbanded in 2005, was German champion eight times and runner-up five times since 1976. Another Aalen club, the TSV Dewangen, wrestled in the Federal League until 2009. Two American sports, American Football and Baseball, are pursued by the MTV Aalen. Volleyball has been gaining in popularity in Aalen for years. The first men's team of DJK Aalen accomplished qualification for regional league in the season of 2008/09. The Ostalb ski lifts are located south of the town centre, at the northern slope of the Swabian Jura. The skiing area comprises two platter lifts that have a vertical rise of , with two runs with lengths of and a beginners' run. Regular events Reichsstädter Tage Since 1975, Reichsstädter Tage ("Imperial City days") festival is held annually in the town centre on the second weekend in September. It is deemed the largest festival of the Ostwürttemberg region, and is associated with a shopping Sunday in accordance with the code. The festival is also attended by delegations from the twinned cities. On the town hall square, on Sunday an ecumenical service is held. Roman Festival The international Roman Festival (Römertage) are held biannially on the site of the former Roman fort and the modern Limes museum. The festival's ninth event in 2008 was attended by around 11,000 people. Aalen Jazz Festival Annually during the second week of November, the Aalen Jazz Festival brings known and unknown artists to Aalen. It has already featured musicians like Miles Davis, B. B. King, Ray Charles, David Murray, McCoy Tyner, Al Jarreau, Esbjörn Svensson and Albert Mangelsdorff. The festival is complemented by individual concerts in spring and summer, and, including the individual concerts, comprises around 25 concerts with a total of about 13,000 visitors. Economy and infrastructure In 2008 there were 30,008 employees liable to social insurance living in Aalen. 13,946 (46.5 percent) were employed in the manufacturing sector, 4,715 (15.7 percent) in commerce, catering, hotels and transport, and 11,306 (37.7 percent) in other services. Annually 16,000 employees commute to work, with about 9,000 living in the town and commuting out. Altogether in Aalen there are about 4,700 business enterprises, 1,100 of them being registered in the trade register. The others comprise 2,865 small enterprises and 701 craft enterprises. In Aalen, metalworking is the predominant industry, along with machine-building. Other industries include optics, paper, information technology, chemicals, textiles, medical instruments, pharmaceuticals, and food. Notable enterprises include SHW Automotive (originating from the former Schwäbische Hüttenwerke steel mills and a mill of 1671 in Wasseralfingen), the Alfing Kessler engineering works, the precision tools manufacturer MAPAL Dr. Kress, the snow chain manufacturer RUD Ketten Rieger & Dietz and its subsidiary Erlau, the Gesenkschmiede Schneider forging die smithery, the SDZ Druck und Medien media company, the Papierfabrik Palm paper mill, the alarm system manufacturer Telenot, the laser show provider LOBO electronic and the textile finisher Lindenfarb, which all have their seat in Aalen. A branch in Aalen is maintained by optical systems manufacturer Carl Zeiss headquartered in nearby Oberkochen. Transport Rail Aalen station is a regional railway hub on the Stuttgart-Bad Cannstatt–Nördlingen railway from Stuttgart and , the Aalen–Ulm railway from Ulm and the Goldshöfe–Crailsheim railway to Crailsheim. Until 1972, the Härtsfeld Railway connected Aalen with Dillingen an der Donau via Neresheim. Other railway stations within the town limits are Hofen (b Aalen), Unterkochen, Wasseralfingen and Goldshöfe station. The Aalen-Erlau stop situated in the south is no longer operational. Aalen station is served at two-hour intervals by trains of Intercity line 61 Karlsruhe–Stuttgart–Aalen–Nuremberg. For regional rail travel, Aalen is served by various lines of the Interregio-Express, Regional-Express and Regionalbahn categories. Since the beginning of 2019, the British company Go-Ahead took over the regional railway business of DB Regio in the region surrounding Aalen. The town also operates the Aalen industrial railway (Industriebahn Aalen), which carries about 250 carloads per year. Bus Aalen also is a regional hub in the bus network of OstalbMobil, the transport network of the district Aalen is in. The bus lines are operated and serviced by regional companies like OVA and RBS RegioBus Stuttgart. Street The junctions of Aalen/Westhausen and Aalen/Oberkochen connect Aalen with the Autobahn A7 (Würzburg–Füssen). Federal roads (Bundesstraßen) connecting with Aalen are B 19 (Würzburg–Ulm), B 29 (Waiblingen–Nördlingen) and B 290 (Tauberbischofsheim–Westhausen). The Schwäbische Dichterstraße ("Swabian Poets' Route") tourist route established in 1977/78 leads through Aalen. Several bus lines operate within the borough. The Omnibus-Verkehr Aalen company is one of the few in Germany that use double-decker buses, it has done so since 1966. A district-wide fare system, OstalbMobil, has been in effect since 2007. Air transport Stuttgart Airport, offering international connections, is about away, the travel time by train is about 100 Minutes. At Aalen-Heidenheim Airport, located south-east of Aalen, small aircraft are permitted. Gliding airfields nearby are in Heubach and Bartholomä. Bicycle Bicycle routes stretching through Aalen are the Deutscher Limes-Radweg ("German Limes Bicycle Route") and the Kocher-Jagst Bicycle Route. Public facilities Aalen houses an Amtsgericht (local district court), chambers of the Stuttgart Labour Court, a notary's office, a tax office and an employment agency. It is the seat of the Ostalbkreis district office, of the Aalen Deanery of the Evangelical-Lutheran Church and of the Ostalb deanery of the Roman Catholic Diocese of Rottenburg-Stuttgart. The Stuttgart administrative court, the Stuttgart Labour Court and the Ulm Social Welfare Court are in charge for Aalen. Aalen had a civic hospital, which resided in the Bürgerspital building until 1873, then in a building at Alte Heidenheimer Straße. In 1942, the hospital was taken over by the district. The district hospital at the present site of Kälblesrain, known today as Ostalb-Klinikum, was opened in 1955. Media The first local newspaper, Der Bote von Aalen ("The Herald of Aalen"), has been published on Wednesdays and Saturdays since 1837. Currently, local newspapers published in Aalen are the Schwäbische Post, which obtains its supra-regional pages from the Ulm-based Südwestpresse, and the Aalener Nachrichten (erstwhile Aalener Volkszeitung), a local edition of Schwäbische Zeitung in Leutkirch im Allgäu. Two of Germany's biggest Lesezirkels (magazine rental services) are headquartered in Aalen: Brabandt LZ Plus Media and Lesezirkel Portal. Regional event magazines are Xaver, åla, ålakultur. The commercial broadcasters Radio Ton and Radio 7 have studios in Aalen. Education A Latin school was first recorded in Aalen in 1447; it was remodeled in 1616 and also later in various buildings that were all situated near the town church, and continued up through the 19th century. In the course of the reformation, a "German school" was established in tandem, being a predecessor of the latter Volksschule school type. In 1860, the Ritterschule was built as a Volksschule for girls; the building today houses the Pestalozzischule. In 1866, a new building was erected for the Latin school and for the Realschule established in 1840. This building, later known as the Alte Gewerbeschule, was torn down in 1975 to free up land for the new town hall. In 1912, the Parkschule building was opened. It was designed by Paul Bonatz and today houses the Schubart-Gymnasium. The biggest educational institution in the town is the Hochschule Aalen, which was founded in 1962 and focuses on engineering and economics. It is attended by 5000 students on five campuses and employs 129 professors and 130 other lecturers. The town provides three Gymnasiums, four Realschulen, two Förderschulen (special schools), six combined Grundschulen and Hauptschulen and eight standalone Grundschulen. The Ostalbkreis district provides three vocational schools and three additional special schools. Finally, six non-state schools of various types exist. The German Esperanto Library (German: Deutsche Esperanto-Bibliothek, Esperanto: Germana Esperanto-Biblioteko) has been located in the building of the town library since 1989. TV and radio transmission tower The Südwestrundfunk broadcasting company operates the Aalen transmission tower on the Braunenberg hill. The tower was erected in 1956, it is tall and made of reinforced concrete. Things named after Aalen The following vehicles are named "Aalen": The Lufthansa Boeing 737-500 D-ABJF The Deutsche Bahn ICE 3 Tz309 (since 2 June 2008) Notable people Honorary citizens Ruland Ayßlinger, composer Erwin Rommel (1891–1944), Field Marshal of World War II, grew up in Aalen Paul Edel Wilhelm Jakob Schweiker (1859–1927), founder of the Aalen Historical Society (Geschichts- und Altertumsverein Aalen) and name giver of the Wilhelm Jakob Schweiker Award Ulrich Pfeifle, Mayor of Aalen from 1976 until 2005 Persons born in Aalen Johann Christoph von Westerstetten (1563–1637), Bishop of Eichstätt and counter-reformer Karl Joseph von Hefele (1809–1893), Roman Catholic theologian, clerical historian and bishop Karl Wahl (1892–1981), Gauleiter of Swabia, Obergruppenführer Kurt Jooss (1901–1979), born in Wasseralfingen; dancer, choreographer and dance educator August Zehender (1903–1945), SS Brigade Commander and Major General of the Waffen-SS Paul Buck (1911–2006), piano teacher Bruno Heck (1917–1989), politician of the CDU, former minister of the federal government and CDU secretary general Hermann Bausinger (1926-2021), cultural scientist Alfred Bachofer (born 1942), former Lord Mayor of Nürtingen Walter Adams (born 1945 in Wasseralfingen), middle-distance runner Ivo Holzinger (born 1948), politician (SPD), Lord Mayor of Memmingen (since 1980) Werner Sobek (born 1953), architect and structural engineer Ludwig Leinhos (born 1956), major general of the Bundesluftwaffe Bernd Hitzler (born 1957), politician, (CDU), Member of Landtag Martin Gerlach (born 1965), independent politician, mayor of Aalen (2005-2013) Thomas Zander (born 1967), wrestler, winner of Olympic silver medal and world champion (1994) Carl-Uwe Steeb (born 1967), retired tennis player Katrin Bauerfeind (born 1982), radio and TV-presenter Manuel Fischer (born 1989), footballer Patrick Funk (born 1990), footballer Cro (born 1990), Carlo Waibel, singer Other Christian Friedrich Daniel Schubart (1739–1791), poet, organ player, composer and journalist; lived in Aalen as a child and adolescent Rudolf Duala Manga Bell (1873–1914), King of Duala and resistance leader in the German colony of Kamerun, lived in Aalen from 1891 until 1896. Georg Elser (1903–1945), opponent of Nazism, worked in 1923 as an apprentice carpenter in Aalen. Werner Bickelhaupt (born 1939), football coach, lives in Aalen since 2004. Gerhard Thiele (born 1953 in Heidenheim), physicist and former astronaut, attended school in Aalen. Andreas Beck (born 1987 in Kemerovo/Soviet Union), German footballer, grew up in Aalen. Notes References Further reading External links Town of Aalen's website Geographical information system of the town of Aalen (in German) Towns in Baden-Württemberg Ostalbkreis 150s establishments in the Roman Empire 260s disestablishments in the Roman Empire Populated places established in the 7th century 7th-century establishments in Germany States and territories established in 1360 1360s establishments in the Holy Roman Empire 1360 establishments in Europe States and territories disestablished in the 1800s 1803 disestablishments in the Holy Roman Empire Free imperial cities Württemberg Holocaust locations in Germany
2386
https://en.wikipedia.org/wiki/American%20Airlines
American Airlines
American Airlines is a major US-based airline headquartered in Fort Worth, Texas, within the Dallas–Fort Worth metroplex. It is the largest airline in the world when measured by scheduled passengers carried and revenue passenger mile. American, together with its regional partners and affiliates, operates an extensive international and domestic network with almost 6,800 flights per day to nearly 350 destinations in 48 countries. American Airlines is a founding member of the Oneworld alliance. Regional service is operated by independent and subsidiary carriers under the brand name American Eagle. American Airlines and American Eagle operate out of 10 hubs, with Dallas/Fort Worth International Airport (DFW) being its largest. The airline handles more than 200 million passengers annually with an average of more than 500,000 passengers daily. As of 2022, the company employs 129,700 staff members. History American Airlines was started in 1930 via a union of more than eighty small airlines. The two organizations from which American Airlines was originated were Robertson Aircraft Corporation and Colonial Air Transport. The former was first created in Missouri in 1921, with both being merged in 1929 into holding company The Aviation Corporation. This, in turn, was made in 1930 into an operating company and rebranded as American Airways. In 1934, when new laws and attrition of mail contracts forced many airlines to reorganize, the corporation redid its routes into a connected system and was renamed American Airlines. The airline fully developed its international business between 1970 and 2000. It purchased Trans World Airlines in 2001. American had a direct role in the development of the Douglas DC-3, which resulted from a marathon telephone call from American Airlines CEO C. R. Smith to Douglas Aircraft Company founder Donald Wills Douglas Sr., when Smith persuaded a reluctant Douglas to design a sleeper aircraft based on the DC-2 to replace American's Curtiss Condor II biplanes. (The existing DC-2's cabin was wide, too narrow for side-by-side berths.) Douglas agreed to go ahead with development only after Smith informed him of American's intention to purchase 20 aircraft. The prototype DST (Douglas Sleeper Transport) first flew on December 17, 1935, the 32nd anniversary of the Wright Brothers' flight at Kitty Hawk. Its cabin was wide, and a version with 21 seats instead of the 14–16 sleeping berths of the DST was given the designation DC-3. There was no prototype DC-3; the first DC-3 built followed seven DSTs off the production line and was delivered to American Airlines. American Airlines inaugurated passenger service on June 26, 1936, with simultaneous flights from Newark, New Jersey, and Chicago, Illinois. American also had a direct role in the development of the DC-10, which resulted from a specification from American Airlines to manufacturers in 1966 to offer a widebody aircraft that was smaller than the Boeing 747, but capable of flying similar long-range routes from airports with shorter runways. McDonnell Douglas responded with the DC-10 trijet shortly after the two companies' merger. On February 19, 1968, the president of American Airlines, George A. Spater, and James S. McDonnell of McDonnell Douglas announced American's intention to acquire the DC-10. American Airlines ordered 25 DC-10s in its first order. The DC-10 made its first flight on August 29, 1970, and received its type certificate from the FAA on July 29, 1971. On August 5, 1971, the DC-10 entered commercial service with American Airlines on a round trip flight between Los Angeles and Chicago. In 2011, due to a downturn in the airline industry, American Airlines' parent company, the AMR Corporation, filed for bankruptcy protection. In 2013, American Airlines merged with US Airways but kept the American Airlines name, as it was the better-recognized brand internationally; the combination of the two airlines resulted in the creation of the largest airline in the United States, and ultimately the world. Destinations and hubs Destinations As of July 2022, American Airlines flies to 269 domestic destinations and 81 international destinations in 58 countries (as of August 2022) in five continents. Hubs American currently operates ten hubs. Charlotte – American's hub for the southeastern United States and secondary Caribbean gateway. Its operations in Concourse E are the largest regional flight operation in the world. American has about 91% of the market share at CLT, making it the largest carrier at the airport. It is a former US Airways hub. Chicago–O'Hare – American's hub for the Midwest. American has about 35% of the market share at O'Hare, making it the airport's second largest airline after United. Dallas/Fort Worth – American's hub for the southern United States and largest hub overall. American currently has about 87% of the market share at DFW, making it the largest carrier at the airport. American's corporate headquarters are also in Fort Worth near the airport. DFW serves as American's primary Transpacific hub, primary gateway to Mexico and its secondary gateway to Latin America. Los Angeles – American's hub for the West Coast and secondary transpacific gateway. Though American has increasingly reduced its network out of Los Angeles, citing many long-haul international routes as unprofitable, it still maintains a handful of transatlantic and transpacific flights. Miami – American's primary Latin American and Caribbean hub. American has about 68% of the market share in Miami, making it the largest airline at the airport. New York–JFK – American's primary transatlantic hub, while including other select flights to South America and Asia. Mostly serves destinations with a lot of business traffic. American has about 12% of the market share at JFK, making it the third largest carrier at the airport behind Delta and JetBlue. New York–LaGuardia – American's second New York hub. Philadelphia – American's primary Northeast domestic hub and secondary transatlantic hub, primarily for London, Paris, and leisure destinations in Western and Southern Europe. American has about 70% of the market share at PHL, making it the airport's largest airline. Another former US Airways hub. Phoenix–Sky Harbor – American's Rocky Mountain hub. Currently American has about 33% of the market share at PHX, making it the airport's second-largest airline. Former US Airways hub. Washington–Reagan – American's hub for the capital of the United States. American has about 49% of the market share at DCA, making it the largest carrier at the airport. Former US Airways hub. Alliance and codeshare agreements American Airlines is a member of the Oneworld alliance and has codeshares with the following airlines: Aer Lingus Air Tahiti Nui Alaska Airlines Cape Air Cathay Pacific China Southern Airlines El Al Fiji Airways Gol Transportes Aéreos Hawaiian Airlines IndiGo JetSmart Jetstar Jetstar Japan Level Korean Air Malaysia Airlines Qatar Airways Royal Air Maroc Royal Jordanian Seaborne Airlines Silver Airways SriLankan Airlines Vueling Joint ventures In addition to the above codeshares, American Airlines has entered into joint ventures with the following airlines: British Airways Finnair Iberia Japan Airlines Qantas Fleet As of January 2023, American Airlines operates the largest commercial fleet in the world, comprising 933 aircraft from both Boeing and Airbus, with an additional 161 planned or on order. Over 80% of American's aircraft are narrow-bodies, mainly Airbus A320 series and the Boeing 737-800. It is the largest A320 series aircraft operator in the world, as well as the largest operator of the A319 and A321 variants. It is the fourth-largest operator of 737 family aircraft and second-largest operator of the 737-800 variant. American's wide-body aircraft are all Boeing airliners. It is the third-largest operator of the Boeing 787 series and the sixth-largest operator of the Boeing 777 series. American exclusively ordered Boeing aircraft throughout the 2000s. This strategy shifted on July 20, 2011, when American announced the largest combined aircraft order in history for 460 narrow-body jets including 260 aircraft from the Airbus A320 series. Additional Airbus aircraft joined the fleet in 2013 during the US Airways merger, which operated a nearly all Airbus fleet. On August 16, 2022, American announced that a deal had been confirmed with Boom Supersonic to purchase at least 20 of their Overture supersonic airliners and potentially up to 60 in total. American Airlines operates aircraft maintenance and repair bases at the Charlotte, Chicago O'Hare, Dallas–Fort Worth, Pittsburgh, and Tulsa airports. Only American's widebody planes and its specially-configured Airbus A321T feature seatback entertainment. All other A321 and all Boeing 737 planes were retrofitted with their "Oasis" configuration. While this configuration adds larger overhead bins, it also added more seats, reduced legroom and seat padding, and removed seatback entertainment, which has drawn ire from some travelers. Cabins Flagship First Flagship First is American's international and transcontinental first class product. It is offered only on Boeing 777-300ERs and select Airbus A321s which American designates "A321T". The seats are fully lie-flat and offer direct aisle access with only one on each side of the aisle in each row. As with the airline's other premium cabins, Flagship First offers wider food and beverage options, larger seats, and lounge access at certain airports. American offers domestic Flagship First service on transcontinental routes between New York–JFK and Los Angeles, New York–JFK and San Francisco, New York-JFK and Santa Ana, Boston and Los Angeles, and Miami and Los Angeles, as well as on the standard domestic route between New York-JFK and Boston. The airline will debut new Flagship Suite® premium seats and a revamped aircraft interior for its long-haul fleet with fresh deliveries of its Airbus A321XLR and Boeing 787-9 aircraft, beginning in 2024. Flagship Business Flagship Business is American's international and transcontinental business class product. It is offered on all Boeing 777-200ERs, Boeing 777-300ERs, Boeing 787-8s, and Boeing 787-9s, as well as select Airbus A321s. All Flagship Business seats are fully lie-flat. The amenities in Flagship Business include complimentary alcoholic/non-alcoholic beverages, multi-course meals, and lounge access. Domestic first class First class is offered on all domestically configured aircraft. Seats range from in width and have of pitch. Dining options include complementary alcoholic and non-alcoholic beverages on all flights as well as standard economy snack offerings, enhanced snack basket selections on flights over , and meals on flights or longer. Premium Economy is American's economy plus product. It is offered on all widebody aircraft. The cabin debuted on the airline's Boeing 787-9s in late 2016 and is also available on Boeing 777-200s and -300s, and Boeing 787-8s. Premium Economy seats are wider than seats in the main cabin (American's economy cabin) and provide more amenities: Premium Economy customers get two free checked bags, priority boarding, and enhanced food and drink service including free alcohol. This product made American Airlines the first U.S. carrier to offer a four-cabin aircraft. Main Cabin Extra is American's enhanced economy product. It is available on all of the mainline fleet and American Eagle aircraft. Main Cabin Extra seats include greater pitch than is available in main cabin, along with free alcoholic beverages and boarding one group ahead of main cabin. American retained Main Cabin Extra when the new Premium Economy product entered service in late 2016. Main Cabin Main Cabin (economy class) is American's economy product and is found on all mainline and regional aircraft in its fleet. Seats range from in width and have of pitch. American markets a number of rows within the main cabin immediately behind Main Cabin Extra as "Main Cabin Preferred", which require an extra charge to select for those without status. American Airlines marketed increased legroom in economy class as "More Room Throughout Coach", also referred to as "MRTC", starting in February 2000. Two rows of economy class seats were removed on domestic narrowbody aircraft, resulting in more than half of all standard economy seats having a pitch of or more. Amid financial losses, this scheme was discontinued in 2004. On many routes, American also offers Basic Economy, the airline's lowest main cabin fare. Basic Economy consists of a Main Cabin ticket with numerous restrictions including waiting until check-in for a seat assignment, no upgrades or refunds, and boarding in the last group. Originally Basic Economy passengers could only carry a personal item, but American later revised their Basic Economy policies to allow for a carry-on bag. In May 2017, American announced it would be adding more seats to some of its Boeing 737 MAX 8 jets and reducing overall legroom in the basic economy class. The last three rows were to lose , going from the current . The remainder of the main cabin was to have of legroom. This "Project Oasis" seating configuration has since been expanded to all 737 MAX 8s as well as standard Boeing 737-800 and non-transcontinental Airbus A321 jets. New Airbus A321neo jets have been delivered with the same configuration. This configuration has been considered unpopular with passengers, especially American's frequent flyers, as the new seats have less padding, less legroom, and no seatback entertainment. Reward programs AAdvantage AAdvantage is the frequent flyer program for American Airlines. It was launched on May 1, 1981, and it remains the largest frequent flyer program with over 115 million members as of 2021. Miles accumulated in the program allow members to redeem tickets, upgrade service class, or obtain free or discounted car rentals, hotel stays, merchandise, or other products and services through partners. The most active members, based on the accumulation of Loyalty Points with American Airlines, are designated AAdvantage Gold, AAdvantage Platinum, AAdvantage Platinum Pro, and AAdvantage Executive Platinum elite members, with privileges such as separate check-in, priority upgrade, and standby processing, or free upgrades. AAdvantage status correspond with Oneworld status levels allowing elites to receive reciprocal benefits from American's oneworld partner airlines. AAdvantage co-branded credit cards are also available and offer other benefits. The cards are issued by CitiCards, a subsidiary of Citigroup, Barclaycard, and Bilt card in the United States, by several banks including Butterfield Bank and Scotiabank in the Caribbean, and by Banco Santander in Brazil. AAdvantage allows one-way redemption, starting at 7,500 miles. Admirals Club The Admirals Club was conceived by AA president C.R. Smith as a marketing promotion shortly after he was made an honorary Texas Ranger. Inspired by the Kentucky colonels and other honorary title designations, Smith decided to make particularly valued passengers "admirals" of the "Flagship fleet" (AA called its aircraft "Flagships" at the time). The list of admirals included many celebrities, politicians, and other VIPs, as well as more "ordinary" customers who had been particularly loyal to the airline. There was no physical Admirals Club until shortly after the opening of LaGuardia Airport. During the airport's construction, New York Mayor Fiorello LaGuardia had an upper-level lounge set aside for press conferences and business meetings. At one such press conference, he noted that the entire terminal was being offered for lease to airline tenants; after a reporter asked whether the lounge would be leased as well, LaGuardia replied that it would, and a vice president of AA immediately offered to lease the premises. The airline then procured a liquor license and began operating the lounge as the "Admirals Club" in 1939. The second Admirals Club opened at Washington National Airport. Because it was illegal to sell alcohol in Virginia at the time, the club contained refrigerators for the use of its members, so they could store their liquor at the airport. For many years, membership in the Admirals Club (and most other airline lounges) was by the airline's invitation. After a passenger sued for discrimination, the club switched to a paid membership program in 1974. Flagship Lounge Though affiliated with the Admirals Club and staffed by many of the same employees, the Flagship Lounge is a separate lounge specifically designed for customers flying in first class and business class on international flights and transcontinental domestic flights. Corporate affairs Business trends The key trends for American Airlines are (as of the financial year ending 31 December): Ownership and structure American Airlines, Inc., is publicly traded through its parent company, American Airlines Group Inc., under NASDAQ: AAL , with a market capitalization of about $12 billion as of 2019, and is included in the S&P 500 index. American Eagle is a network of six regional carriers that operate under a codeshare and service agreement with American, operating flights to destinations in the United States, Canada, the Caribbean, and Mexico. Three of these carriers are independent and three are subsidiaries of American Airlines Group: Envoy Air Inc., Piedmont Airlines, Inc., and PSA Airlines Inc. Headquarters American Airlines is headquartered across several buildings in Fort Worth, Texas that it calls the "Robert L. Crandall Campus" in honor of former president and CEO Robert Crandall. The square-foot, five-building office complex called was designed by Pelli Clarke Pelli Architects. The campus is located on 300 acres, adjacent to Dallas/Fort Worth International Airport, American's fortress hub. Before it was headquartered in Texas, American Airlines was headquartered at 633 Third Avenue in the Murray Hill area of Midtown Manhattan, New York City. In 1979, American moved its headquarters to a site at Dallas/Fort Worth International Airport, which affected up to 1,300 jobs. Mayor of New York City Ed Koch described the move as a "betrayal" of New York City. American moved to two leased office buildings in Grand Prairie, Texas. On January 17, 1983, the airline finished moving into a $150 million ($ when adjusted for inflation), facility in Fort Worth; $147 million (about $ when adjusted for inflation) in Dallas/Fort Worth International Airport bonds financed the headquarters. The airline began leasing the facility from the airport, which owns the facility. Following the merger of US Airways and American Airlines, the new company consolidated its corporate headquarters in Fort Worth, abandoning the US Airways headquarters in Phoenix, AZ. As of 2015, American Airlines is the corporation with the largest presence in Fort Worth. In 2015, American announced that it would build a new headquarters in Fort Worth. Groundbreaking began in the spring of 2016 and occupancy completed in September 2019. The airline plans to house 5,000 new workers in the building. It will be located on a property adjacent to the airline's flight academy and conference and training center, west of Texas State Highway 360, west from the current headquarters. The airline will lease a total of from Dallas–Fort Worth International Airport and this area will include the headquarters. Construction of the new headquarters began after the demolition of the Sabre facility, previously on the site. The airline considered developing a new headquarters in Irving, Texas, on the old Texas Stadium site, before deciding to keep the headquarters in Fort Worth. Corporate identity Logo In 1931, Goodrich Murphy, an American employee, designed the AA logo as an entry in a logo contest. The eagle in the logo was copied from a Scottish hotel brochure. The logo was redesigned by Massimo Vignelli in 1967. Thirty years later, in 1997, American Airlines was able to make its logo Internet-compatible by buying the domain AA.com. AA is also American's two-letter IATA airline designator. On January 17, 2013, American launched a new rebranding and marketing campaign with FutureBrand dubbed, "A New American". This included a new logo, which includes elements of the 1967 logo. American Airlines faced difficulty obtaining copyright registration for their 2013 logo. On June 3, 2016, American Airlines sought to register it with the United States Copyright Office, but in October of that year, the Copyright Office ruled that the logo was ineligible for copyright protection, as it did not pass the threshold of originality, and was thus in the public domain. American requested that the Copyright Office reconsider, but on January 8, 2018, the Copyright Office affirmed its initial determination. After American Airlines submitted additional materials, the Copyright Office reversed its decision on December 7, 2018, and ruled that the logo contained enough creativity to merit copyright protection. Aircraft livery American's early liveries varied widely, but a common livery was adopted in the 1930s, featuring an eagle painted on the fuselage. The eagle became a symbol of the company and inspired the name of American Eagle Airlines. Propeller aircraft featured an international orange lightning bolt running down the length of the fuselage, which was replaced by a simpler orange stripe with the introduction of jets. In the late 1960s, American commissioned designer Massimo Vignelli to develop a new livery. The original design called for a red, white, and blue stripe on the fuselage, and a simple "AA" logo, without an eagle, on the tail; instead, Vignelli created a highly stylized eagle, which remained the company's logo until January 16, 2013. On January 17, 2013, American unveiled a new livery. Before then, American had been the only major U.S. airline to leave most of its aircraft surfaces unpainted. This was because C. R. Smith would not say he liked painted aircraft and refused to use any liveries that involved painting the entire plane. Robert "Bob" Crandall later justified the distinctive natural metal finish by noting that less paint reduced the aircraft's weight, thus saving on fuel costs. In January 2013, American launched a new rebranding and marketing campaign dubbed, "The New American". In addition to a new logo, American Airlines introduced a new livery for its fleet. The airline calls the new livery and branding "a clean and modern update". The current design features an abstract American flag on the tail, along with a silver-painted fuselage, as a throw-back to the old livery. The new design was painted by Leading Edge Aviation Services in California. Doug Parker, the incoming CEO indicated that the new livery could be short-lived, stating that "maybe we need to do something slightly different than that ... The only reason this is an issue now is that they just did it right in the middle, which kind of makes it confusing, so that gives us an opportunity, actually, to decide if we are going to do something different because we have so many airplanes to paint". The current logo and livery have had mixed criticism, with Design Shack editor Joshua Johnson writing that they "boldly and proudly communicate the concepts of American pride and freedom wrapped into a shape that instantly makes you think about an airplane", and AskThePilot.com author Patrick Smith describing the logo as 'a linoleum knife poking through a shower curtain'. Later in January 2013, Bloomberg asked the designer of the 1968 American Airlines logo (Massimo Vignelli) on his opinion over the rebranding. In the end, American let their employees decide the new livery's fate. On an internal website for employees, American posted two options, one the new livery and one a modified version of the old livery. All of the American Airlines Group employees (including US Airways and other affiliates) were able to vote. American ultimately decided to keep the new look. Parker announced that American would keep a US Airways and America West heritage aircraft in the fleet, with plans to add a heritage TWA aircraft and a heritage American plane with the old livery. As of September 2019, American has heritage aircraft for Piedmont, PSA, America West, US Airways, Reno Air, TWA, and AirCal in their fleet. They also have two AA branded heritage 737-800 aircraft, an AstroJet N905NN, and the polished aluminum livery used from 1967 to 2013, N921NN. Customer Service American, both before and after the merger with US Airways, has consistently performed poorly in rankings. The Wall Street Journal's annual airline rankings have ranked American as the worst or second-worst U.S. carrier for ten of the past twelve years, and in the bottom three of U.S. Airlines for at least the past twelve years. The airline has persistently performed poorly in the areas of losing checked luggage and bumping passengers due to oversold flights. Worker relations The main representatives of key groups of employees are: The Allied Pilots Association is an in-house union which represents the nearly 15,000 American Airlines pilots; it was created in 1963 after the pilots left the Air Line Pilots Association (ALPA). However the majority of American Eagle pilots are ALPA members. The Association of Professional Flight Attendants represents American Airlines flight attendants, including former USAirways flight attendants. Flight attendants at wholly owned regional carriers (Envoy, Piedmont, and PSA) are all represented by Association of Flight Attendants – Communications Workers of America (AFA-CWA). US Airways flight attendants were active members of AFA-CWA before the merger, and they are honorary lifetime members. AFA-CWA is the largest flight attendant union in the industry. The Transport Workers Union-International Association of Machinists alliance (TWU-IAM) represents the majority of American Airlines employed fleet service agents, mechanics, and other ground workers. American's customer service and gate employees belong to the Communications Workers of America/International Brotherhood of Teamsters Passenger Service Association. Concerns and conflicts Environmental violations Between October 1993 to July 1998, American Airlines was repeatedly cited for using high-sulfur fuel in motor vehicles at 10 major airports around the country, a violation of the Clean Air Act. Lifetime AAirpass Since 1981, as a means of creating revenue in a period of loss-making, American Airlines had offered a lifetime pass of unlimited travel, for the initial cost of $250,000. This entitled the pass holder to fly anywhere in the world. Twenty-eight were sold. However, after some time, the airline realized they were making losses on the tickets, with the ticketholders costing them up to $1 million each. Ticketholders were booking large numbers of flights with some ticketholders flying interstate for lunch or flying to London multiple times a month. AA raised the cost of the lifetime pass to $3 million, and then finally stopped offering it in 2003. AA then used litigation to cancel two of the lifetime offers, saying the passes "had been terminated due to fraudulent activity". Cabin fume events In 1988, on American Airlines Flight 132's approach into Nashville, flight attendants notified the cockpit that there was smoke in the cabin. The flight crew in the cockpit ignored the warning, as on a prior flight, a fume event had occurred due to a problem with the auxiliary power unit. However, the smoke on Flight 132 was caused by improperly packaged hazardous materials. According to the NTSB inquiry, the cockpit crew persistently refused to acknowledge that there was a serious threat to the aircraft or the passengers, even after they were told that the floor was becoming soft and passengers had to be reseated. As a result, the aircraft was not evacuated immediately on landing, exposing the crew and passengers to the threat of smoke and fire longer than necessary. On April 11, 2007, toxic smoke and oil fumes leaked into the aircraft cabin as American Airlines Flight 843 taxied to the gate. A flight attendant who was present in the cabin subsequently filed a lawsuit against Boeing, stating that she was diagnosed with neurotoxic disorder due to her exposure to the fumes, which caused her to experience memory loss, tremors, and severe headaches. She settled with the company in 2011. In 2009, Mike Holland, deputy chairman for radiation and environmental issues at the Allied Pilots Association and an American Airlines pilot, said that the pilot union had started alerting pilots of the danger of contaminated bleed air, including contacting crew members that the union thinks were exposed to contamination based on maintenance records and pilot logs. In a January 2017 incident on American Airlines Flight 1896, seven flight attendants were hospitalized after a strange odor was detected in the cabin. The Airbus A330 involved subsequently underwent a "thorough maintenance inspection", having been involved in three fume events in three months. In August 2018, American Airlines flight attendants picketed in front of the Fort Worth company headquarters over a change in sick day policy, complaining that exposure to ill passengers, toxic uniforms, toxic cabin air, radiation exposure, and other issues were causing them to be sick. In January 2019, two pilots and three flight attendants on Flight 1897 from Philadelphia to Fort Lauderdale were hospitalized following complaints of a strange odor. Discrimination complaints On October 24, 2017, the NAACP issued a travel advisory for American Airlines urging African Americans to "exercise caution" when traveling with the airline. The NAACP issued the advisory after four incidents. In one incident, a black woman was moved from first class to coach while her white traveling companion was allowed to remain in first class. In another incident, a black man was forced to give up his seats after being confronted by two unruly white passengers. According to the NAACP, while they did receive complaints on other airlines, most of their complaints in the year before their advisory were on American Airlines. In July 2018, the NAACP lifted their travel advisory saying that American has made improvements to mitigate discrimination and unsafe treatment of African Americans. Accidents and incidents As of March 2019, the airline has had almost sixty aircraft hull losses, beginning with the crash of an American Airways Ford 5-AT-C Trimotor in August 1931. Of these most were propeller driven aircraft, including three Lockheed L-188 Electra turboprop aircraft (of which one, the crash in 1959 of Flight 320, resulted in fatalities). The two accidents with the highest fatalities in both the airline's and U.S. aviation history were Flight 191 in 1979 and Flight 587 in 2001. Out of the 17 hijackings of American Airlines flights, two aircraft were hijacked and destroyed in the September 11 attacks: Flight 11 crashed into the north facade of the North Tower of the World Trade Center, and Flight 77 crashed into the Pentagon; both were bound for LAX from Boston Logan International Airport and Washington Dulles International Airport respectively. Other accidents include the Flight 383 engine failure and fire in 2016. There were two training flight accidents in which the crew were killed and six that resulted in no fatalities. Another four jet aircraft have been written off due to incidents while they were parked between flights or while undergoing maintenance. Carbon footprint American Airlines reported total CO2e emissions (direct and indirect) for the twelve months ending December 31, 2020, at 20,092 Kt (-21,347 /-51.5% y-o-y). The company aims to achieve net zero carbon emissions by 2050. See also AAirpass Air transportation in the United States List of airlines of the United States List of airports in the United States U.S. Airways, which merged with American Airlines in 2013 Notes and references Notes References Further reading External links Official American Airlines Vacations website 1934 establishments in the United States Airlines based in Texas Airlines established in 1934 Airlines for America members American Airlines Group American companies established in 1934 Aviation in Arizona Companies based in Fort Worth, Texas Companies that filed for Chapter 11 bankruptcy in 2011
2389
https://en.wikipedia.org/wiki/Auger%20effect
Auger effect
The Auger effect (; ) or Auger−Meitner effect is a physical phenomenon in which the filling of an inner-shell vacancy of an atom is accompanied by the emission of an electron from the same atom. When a core electron is removed, leaving a vacancy, an electron from a higher energy level may fall into the vacancy, resulting in a release of energy. For light atoms (Z<12), this energy is most often transferred to a valence electron which is subsequently ejected from the atom. This second ejected electron is called an Auger electron. For heavier atomic nuclei, the release of the energy in the form of an emitted photon becomes gradually more probable. Effect Upon ejection, the kinetic energy of the Auger electron corresponds to the difference between the energy of the initial electronic transition into the vacancy and the ionization energy for the electron shell from which the Auger electron was ejected. These energy levels depend on the type of atom and the chemical environment in which the atom was located. Auger electron spectroscopy involves the emission of Auger electrons by bombarding a sample with either X-rays or energetic electrons and measures the intensity of Auger electrons that result as a function of the Auger electron energy. The resulting spectra can be used to determine the identity of the emitting atoms and some information about their environment. Auger recombination is a similar Auger effect which occurs in semiconductors. An electron and electron hole (electron-hole pair) can recombine giving up their energy to an electron in the conduction band, increasing its energy. The reverse effect is known as impact ionization. The Auger effect can impact biological molecules such as DNA. Following the K-shell ionization of the component atoms of DNA, Auger electrons are ejected leading to damage of its sugar-phosphate backbone. Discovery The Auger emission process was observed and published in 1922 by Lise Meitner, an Austrian-Swedish physicist, as a side effect in her competitive search for the nuclear beta electrons with the British physicist Charles Drummond Ellis. The French physicist Pierre Victor Auger independently discovered it in 1923 upon analysis of a Wilson cloud chamber experiment and it became the central part of his PhD work. High-energy X-rays were applied to ionize gas particles and observe photoelectric electrons. The observation of electron tracks that were independent of the frequency of the incident photon suggested a mechanism for electron ionization that was caused from an internal conversion of energy from a radiationless transition. Further investigation, and theoretical work using elementary quantum mechanics and transition rate/transition probability calculations, showed that the effect was a radiationless effect more than an internal conversion effect. See also Auger therapy Charge carrier generation and recombination Characteristic X-ray Coster–Kronig transition Electron capture Radiative Auger effect References Atomic physics Foundational quantum physics Electron spectroscopy
2391
https://en.wikipedia.org/wiki/Akio%20Morita
Akio Morita
was a Japanese entrepreneur and co-founder of Sony along with Masaru Ibuka. Early life Akio Morita was born in Nagoya. Morita's family was involved in sake, miso and soy sauce production in the village of Kosugaya (currently a part of Tokoname City) on the western coast of Chita Peninsula in Aichi Prefecture since 1665. He was the oldest of four siblings and his father Kyuzaemon trained him as a child to take over the family business. Akio, however, found his true calling in mathematics and physics, and in 1944 he graduated from Osaka Imperial University with a degree in physics. He was later commissioned as a sub-lieutenant in the Imperial Japanese Navy, and served in World War II. During his service, Morita met his future business partner Masaru Ibuka at a study group for developing infrared-guided bomb (Ke-Go) in the Navy's Wartime Research Committee. Sony In September 1945, Ibuka founded a radio repair shop in the bombed out Shirokiya Department Store in Nihonbashi, Tokyo. Morita saw a newspaper article about Ibuka's new venture and, after some correspondence, chose to join him in Tokyo. With funding from Morita's father, they co-founded Tokyo Tsushin Kogyo Kabushiki Kaisha (Tokyo Telecommunications Engineering Corporation, the forerunner of Sony Corporation) in 1946 with about 20 employees and initial capital of ¥190,000. In 1949, the company developed magnetic recording tape and, in 1950, sold the first tape recorder in Japan. Ibuka was instrumental in securing the licensing of transistor technology from Bell Labs to Sony in the 1950s, thus making Sony one of the first companies to apply transistor technology to non-military uses. In 1957, the company produced a pocket-sized radio (the first to be fully transistorized), and in 1958, Morita and Ibuka decided to rename their company Sony Corporation (derived from "sonus"—–Latin for "sound"—–and "sonny", a then-common American expression). Morita was an advocate for all the products made by Sony. However, since the radio was slightly too big to fit in a shirt pocket, Morita made his employees wear shirts with slightly larger pockets to give the radio a "pocket sized" appearance. Morita founded Sony Corporation of America (SONAM, currently abbreviated as SCA) in 1960. In the process, he was struck by the mobility of employees between American companies, which was unheard of in Japan at that time. When he returned to Japan, he encouraged experienced, middle-aged employees of other companies to reevaluate their careers and consider joining Sony. The company filled many positions in this manner, and inspired other Japanese companies to do the same. In 1961, Sony Corporation was the first Japanese company to be listed on the New York Stock Exchange, in the form of American depositary receipts (ADRs). In March 1968, Morita set up a joint venture in Japan between Sony and CBS Records, with him as president, to manufacture "software" for Sony's hardware. Morita became president of Sony in 1971, taking over from Ibuka who had served from 1950 to 1971. In 1975, Sony released the first Betamax home videocassette recorder, a year before the VHS format came out. Ibuka retired in 1976 and Morita was named chairman of the company. In 1979, the Walkman was introduced, making it one of the world's first portable music players and in 1982, Sony launched the world's first compact disc player, the Sony CDP-101, with a compact disc (CD) itself, a new data storage format Sony and Philips co-developed. In that year, a 3.5-inch floppy disk structure was introduced by Sony and it soon became the defacto standard. In 1984, Sony launched the Discman series which extended their Walkman brand to portable CD products. Under the vision of Morita, the company aggressively expanded into new businesses. Part of its motivation for doing so was the pursuit of "convergence", linking film, music and digital electronics. Twenty years after setting up a joint venture with CBS Records in Japan, Sony bought CBS Records Group which consisted of Columbia Records, Epic Records and other CBS labels. In 1989, they acquired Columbia Pictures Entertainment (Columbia Pictures, TriStar Pictures and others). Norio Ohga, who had joined the company in the 1950s after sending Morita a letter denouncing the poor quality of the company's tape recorders, succeeded Morita as chief executive officer in 1989. Morita suffered a cerebral hemorrhage in 1993 while playing tennis and on November 25, 1994, stepped down as Sony chairman to be succeeded by Ohga. Other affiliations Morita was vice chairman of the Japan Business Federation (Japan Federation of Economic Organizations), and was a member of the Japan-U.S. Economic Relations Group, also known as the "Wise Men's Group". He helped General Motors with their acquisition of an interest in Isuzu Motors in 1972. He was the third Japanese chairman of the Trilateral Commission. His amateur radio call sign is JP1DPJ. Publications In 1966, Morita wrote a book called Gakureki Muyō Ron (学歴無用論, Never Mind School Records), where he stresses that school records are not important to success or one's business skills. In 1986, Morita wrote an autobiography titled Made in Japan. He co-authored the 1991 book The Japan That Can Say No with politician Shintaro Ishihara, where they criticized American business practices and encouraged Japanese to take a more independent role in business and foreign affairs. (Actually, Morita had no intention to criticize American practices at that time.) The book was translated into English and caused controversy in the United States, and Morita later had his chapters removed from the English version and distanced himself from the book. Awards and honours In 1972, Morita received the Golden Plate Award of the American Academy of Achievement. Morita was awarded the Albert Medal by the United Kingdom's Royal Society of Arts in 1982, the first Japanese to receive the honor. Two years later, he received the prestigious Legion of Honour, and in 1991, was awarded the First Class Order of the Sacred Treasure from the Emperor of Japan. He was elected to the American Philosophical Society in 1992 and the American Academy of Arts and Sciences in 1993. That same year, he was awarded an honorary British knighthood (KBE). Morita received the International Distinguished Entrepreneur Award from the University of Manitoba in 1987. In 1998, he was the only Asian person on Time magazine's list of the 20 most influential business people of the 20th century as part of their Time 100: The Most Important People of the Century. He was posthumously awarded the Grand Cordon of the Order of the Rising Sun in 1999. In 2003, Anaheim University's Graduate School of Business was renamed the Akio Morita School of Business in his honor. The Morita family's support for the program led to the growth of the Anaheim University Akio Morita School of Business in Tokyo, Japan. Television commercials American Express (1984) Death Morita, who loved to play golf and tennis and to watch movies when rainy, suffered a stroke in 1993, during a game of tennis. The stroke weakened him and left him in a wheelchair. On November 25, 1994, he stepped down as Sony chairman. On October 3, 1999, Morita died of pneumonia at the age of 78 in a Tokyo hospital, where he had been admitted since August 1999. References Further reading Morita, Akio. Made in Japan (New York: Dutton, 1986, ) Morita, Akio. Never Mind School Records (1966) ( in Japanese) Morita, Akio (Co-Author) and Shintaro Ishihara. The Japan That Can Say No (Simon & Schuster, 1991, , in Japanese) List of books authored by Akio Morita at WorldCat External links Akio Morita Library Time magazine, AKIO MORITA: Guru Of Gadgets Time Asia, Time 100: Akio Morita Sony Biographical notes PBS notes Full Biography at World of Biography Akio Morita Facts The Morita Family(in Japanese) 1921 births 1999 deaths Honorary Knights Commander of the Order of the British Empire 20th-century Japanese businesspeople Japanese company founders Imperial Japanese Navy personnel of World War II Recipients of the Legion of Honour Recipients of the Order of the Sacred Treasure People from Nagoya Businesspeople from Tokyo Sony people Osaka University alumni International Emmy Directorate Award Imperial Japanese Navy officers Japanese industrialists Deaths from pneumonia in Japan Members of the American Philosophical Society
2392
https://en.wikipedia.org/wiki/Anode
Anode
An anode is an electrode of a polarized electrical device through which conventional current enters the device. This contrasts with a cathode, an electrode of the device through which conventional current leaves the device. A common mnemonic is ACID, for "anode current into device". The direction of conventional current (the flow of positive charges) in a circuit is opposite to the direction of electron flow, so (negatively charged) electrons flow from the anode of a galvanic cell, into an outside or external circuit connected to the cell. For example, the end of a household battery marked with a "+" is the cathode (while discharging). In both a galvanic cell and an electrolytic cell, the anode is the electrode at which the oxidation reaction occurs. In a galvanic cell the anode is the wire or plate having excess negative charge as a result of the oxidation reaction. In an electrolytic cell, the anode is the wire or plate upon which excess positive charge is imposed. As a result of this, anions will tend to move towards the anode where they will undergo oxidation. Historically, the anode of a galvanic cell was also known as the zincode because it was usually composed of zinc. Charge flow The terms anode and cathode are not defined by the voltage polarity of electrodes but the direction of current through the electrode. An anode is an electrode of a device through which conventional current (positive charge) flows into the device from an external circuit, while a cathode is an electrode through which conventional current flows out of the device. If the current through the electrodes reverses direction, as occurs for example in a rechargeable battery when it is being charged, the roles of the electrodes as anode and cathode are reversed. Conventional current depends not only on the direction the charge carriers move, but also the carriers' electric charge. The currents outside the device are usually carried by electrons in a metal conductor. Since electrons have a negative charge, the direction of electron flow is opposite to the direction of conventional current. Consequently, electrons leave the device through the anode and enter the device through the cathode. The definition of anode and cathode is different for electrical devices such as diodes and vacuum tubes where the electrode naming is fixed and does not depend on the actual charge flow (current). These devices usually allow substantial current flow in one direction but negligible current in the other direction. Therefore, the electrodes are named based on the direction of this "forward" current. In a diode the anode is the terminal through which current enters and the cathode is the terminal through which current leaves, when the diode is forward biased. The names of the electrodes do not change in cases where reverse current flows through the device. Similarly, in a vacuum tube only one electrode can emit electrons into the evacuated tube due to being heated by a filament, so electrons can only enter the device from the external circuit through the heated electrode. Therefore, this electrode is permanently named the cathode, and the electrode through which the electrons exit the tube is named the anode. Examples The polarity of voltage on an anode with respect to an associated cathode varies depending on the device type and on its operating mode. In the following examples, the anode is negative in a device that provides power, and positive in a device that consumes power: In a discharging battery or galvanic cell (diagram on left), the anode is the negative terminal: it is where conventional current flows into the cell. This inward current is carried externally by electrons moving outwards. In a recharging battery, or an electrolytic cell, the anode is the positive terminal imposed by an external source of potential difference. The current through a recharging battery is opposite to the direction of current during discharge; in other words, the electrode which was the cathode during battery discharge becomes the anode while the battery is recharging. In battery engineering, it is common to designate one electrode of a rechargeable battery the anode and the other the cathode according to the roles the electrodes play when the battery is discharged. This is despite the fact that the roles are reversed when the battery is charged. When this is done, "anode" simply designates the negative terminal of the battery and "cathode" designates the positive terminal. In a diode, the anode is the terminal represented by the tail of the arrow symbol (flat side of the triangle), where conventional current flows into the device. Note the electrode naming for diodes is always based on the direction of the forward current (that of the arrow, in which the current flows "most easily"), even for types such as Zener diodes or solar cells where the current of interest is the reverse current. In vacuum tubes or gas-filled tubes, the anode is the terminal where current enters the tube. Etymology The word was coined in 1834 from the Greek ἄνοδος (anodos), 'ascent', by William Whewell, who had been consulted by Michael Faraday over some new names needed to complete a paper on the recently discovered process of electrolysis. In that paper Faraday explained that when an electrolytic cell is oriented so that electric current traverses the "decomposing body" (electrolyte) in a direction "from East to West, or, which will strengthen this help to the memory, that in which the sun appears to move", the anode is where the current enters the electrolyte, on the East side: "ano upwards, odos a way; the way which the sun rises". The use of 'East' to mean the 'in' direction (actually 'in' → 'East' → 'sunrise' → 'up') may appear contrived. Previously, as related in the first reference cited above, Faraday had used the more straightforward term "eisode" (the doorway where the current enters). His motivation for changing it to something meaning 'the East electrode' (other candidates had been "eastode", "oriode" and "anatolode") was to make it immune to a possible later change in the direction convention for current, whose exact nature was not known at the time. The reference he used to this effect was the Earth's magnetic field direction, which at that time was believed to be invariant. He fundamentally defined his arbitrary orientation for the cell as being that in which the internal current would run parallel to and in the same direction as a hypothetical magnetizing current loop around the local line of latitude which would induce a magnetic dipole field oriented like the Earth's. This made the internal current East to West as previously mentioned, but in the event of a later convention change it would have become West to East, so that the East electrode would not have been the 'way in' any more. Therefore, "eisode" would have become inappropriate, whereas "anode" meaning 'East electrode' would have remained correct with respect to the unchanged direction of the actual phenomenon underlying the current, then unknown but, he thought, unambiguously defined by the magnetic reference. In retrospect the name change was unfortunate, not only because the Greek roots alone do not reveal the anode's function any more, but more importantly because as we now know, the Earth's magnetic field direction on which the "anode" term is based is subject to reversals whereas the current direction convention on which the "eisode" term was based has no reason to change in the future. Since the later discovery of the electron, an easier to remember and more durably correct technically although historically false, etymology has been suggested: anode, from the Greek anodos, 'way up', 'the way (up) out of the cell (or other device) for electrons'. Electrolytic anode In electrochemistry, the anode is where oxidation occurs and is the positive polarity contact in an electrolytic cell. At the anode, anions (negative ions) are forced by the electrical potential to react chemically and give off electrons (oxidation) which then flow up and into the driving circuit. Mnemonics: LEO Red Cat (Loss of Electrons is Oxidation, Reduction occurs at the Cathode), or AnOx Red Cat (Anode Oxidation, Reduction Cathode), or OIL RIG (Oxidation is Loss, Reduction is Gain of electrons), or Roman Catholic and Orthodox (Reduction – Cathode, anode – Oxidation), or LEO the lion says GER (Losing electrons is Oxidation, Gaining electrons is Reduction). This process is widely used in metals refining. For example, in copper refining, copper anodes, an intermediate product from the furnaces, are electrolysed in an appropriate solution (such as sulfuric acid) to yield high purity (99.99%) cathodes. Copper cathodes produced using this method are also described as electrolytic copper. Historically, when non-reactive anodes were desired for electrolysis, graphite (called plumbago in Faraday's time) or platinum were chosen. They were found to be some of the least reactive materials for anodes. Platinum erodes very slowly compared to other materials, and graphite crumbles and can produce carbon dioxide in aqueous solutions but otherwise does not participate in the reaction. Battery or galvanic cell anode In a battery or galvanic cell, the anode is the negative electrode from which electrons flow out towards the external part of the circuit. Internally the positively charged cations are flowing away from the anode (even though it is negative and therefore would be expected to attract them, this is due to electrode potential relative to the electrolyte solution being different for the anode and cathode metal/electrolyte systems); but, external to the cell in the circuit, electrons are being pushed out through the negative contact and thus through the circuit by the voltage potential as would be expected. Note: in a galvanic cell, contrary to what occurs in an electrolytic cell, no anions flow to the anode, the internal current being entirely accounted for by the cations flowing away from it (cf drawing). Battery manufacturers may regard the negative electrode as the anode, particularly in their technical literature. Though technically incorrect, it does resolve the problem of which electrode is the anode in a secondary (or rechargeable) cell. Using the traditional definition, the anode switches ends between charge and discharge cycles. Vacuum tube anode In electronic vacuum devices such as a cathode-ray tube, the anode is the positively charged electron collector. In a tube, the anode is a charged positive plate that collects the electrons emitted by the cathode through electric attraction. It also accelerates the flow of these electrons. Diode anode In a semiconductor diode, the anode is the P-doped layer which initially supplies holes to the junction. In the junction region, the holes supplied by the anode combine with electrons supplied from the N-doped region, creating a depleted zone. As the P-doped layer supplies holes to the depleted region, negative dopant ions are left behind in the P-doped layer ('P' for positive charge-carrier ions). This creates a base negative charge on the anode. When a positive voltage is applied to anode of the diode from the circuit, more holes are able to be transferred to the depleted region, and this causes the diode to become conductive, allowing current to flow through the circuit. The terms anode and cathode should not be applied to a Zener diode, since it allows flow in either direction, depending on the polarity of the applied potential (i.e. voltage). Sacrificial anode In cathodic protection, a metal anode that is more reactive to the corrosive environment than the metal system to be protected is electrically linked to the protected system. As a result, the metal anode partially corrodes or dissolves instead of the metal system. As an example, an iron or steel ship's hull may be protected by a zinc sacrificial anode, which will dissolve into the seawater and prevent the hull from being corroded. Sacrificial anodes are particularly needed for systems where a static charge is generated by the action of flowing liquids, such as pipelines and watercraft. Sacrificial anodes are also generally used in tank-type water heaters. In 1824 to reduce the impact of this destructive electrolytic action on ships hulls, their fastenings and underwater equipment, the scientist-engineer Humphry Davy developed the first and still most widely used marine electrolysis protection system. Davy installed sacrificial anodes made from a more electrically reactive (less noble) metal attached to the vessel hull and electrically connected to form a cathodic protection circuit. A less obvious example of this type of protection is the process of galvanising iron. This process coats iron structures (such as fencing) with a coating of zinc metal. As long as the zinc remains intact, the iron is protected from the effects of corrosion. Inevitably, the zinc coating becomes breached, either by cracking or physical damage. Once this occurs, corrosive elements act as an electrolyte and the zinc/iron combination as electrodes. The resultant current ensures that the zinc coating is sacrificed but that the base iron does not corrode. Such a coating can protect an iron structure for a few decades, but once the protecting coating is consumed, the iron rapidly corrodes. If, conversely, tin is used to coat steel, when a breach of the coating occurs it actually accelerates oxidation of the iron. Impressed current anode Another cathodic protection is used on the impressed current anode. It is made from titanium and covered with mixed metal oxide. Unlike the sacrificial anode rod, the impressed current anode does not sacrifice its structure. This technology uses an external current provided by a DC source to create the cathodic protection. Impressed current anodes are used in larger structures like pipelines, boats, and water heaters. Related antonym The opposite of an anode is a cathode. When the current through the device is reversed, the electrodes switch functions, so the anode becomes the cathode and the cathode becomes anode, as long as the reversed current is applied. The exception is diodes where electrode naming is always based on the forward current direction. See also Anodizing Galvanic anode Gas-filled tube Primary cell Redox (reduction–oxidation) References External links The Cathode Ray Tube site How to define anode and cathode Valence Technologies Inc. battery education page Cathodic Protection Technical Library Electrodes
2393
https://en.wikipedia.org/wiki/Analog%20television
Analog television
Analog television is the original television technology that uses analog signals to transmit video and audio. In an analog television broadcast, the brightness, colors and sound are represented by amplitude, phase and frequency of an analog signal. Analog signals vary over a continuous range of possible values which means that electronic noise and interference may be introduced. Thus with analog, a moderately weak signal becomes snowy and subject to interference. In contrast, picture quality from a digital television (DTV) signal remains good until the signal level drops below a threshold where reception is no longer possible or becomes intermittent. Analog television may be wireless (terrestrial television and satellite television) or can be distributed over a cable network as cable television. All broadcast television systems used analog signals before the arrival of DTV. Motivated by the lower bandwidth requirements of compressed digital signals, beginning in the 2000s, a digital television transition is proceeding in most countries of the world, with different deadlines for the cessation of analog broadcasts. Several countries have made the switch already, with the remaining countries still in progress mostly in Africa and Asia. Development The earliest systems of analog television were mechanical television systems that used spinning disks with patterns of holes punched into the disc to scan an image. A similar disk reconstructed the image at the receiver. Synchronization of the receiver disc rotation was handled through sync pulses broadcast with the image information. Camera systems used similar spinning discs and required intensely bright illumination of the subject for the light detector to work. The reproduced images from these mechanical systems were dim, very low resolution and flickered severely. Analog television did not begin in earnest as an industry until the development of the cathode-ray tube (CRT), which uses a focused electron beam to trace lines across a phosphor coated surface. The electron beam could be swept across the screen much faster than any mechanical disc system, allowing for more closely spaced scan lines and much higher image resolution. Also, far less maintenance was required of an all-electronic system compared to a mechanical spinning disc system. All-electronic systems became popular with households after World War II. Standards Broadcasters of analog television encode their signal using different systems. The official systems of transmission were defined by the ITU in 1961 as: A, B, C, D, E, F, G, H, I, K, K1, L, M and N. These systems determine the number of scan lines, frame rate, channel width, video bandwidth, video-audio separation, and so on. A color encoding scheme (NTSC, PAL, or SECAM) could be added to the base monochrome signal. Using RF modulation the signal is then modulated onto a very high frequency (VHF) or ultra high frequency (UHF) carrier wave. Each frame of a television image is composed of scan lines drawn on the screen. The lines are of varying brightness; the whole set of lines is drawn quickly enough that the human eye perceives it as one image. The process repeats and next sequential frame is displayed, allowing the depiction of motion. The analog television signal contains timing and synchronization information so that the receiver can reconstruct a two-dimensional moving image from a one-dimensional time-varying signal. The first commercial television systems were black-and-white; the beginning of color television was in the 1950s. A practical television system needs to take luminance, chrominance (in a color system), synchronization (horizontal and vertical), and audio signals, and broadcast them over a radio transmission. The transmission system must include a means of television channel selection. Analog broadcast television systems come in a variety of frame rates and resolutions. Further differences exist in the frequency and modulation of the audio carrier. The monochrome combinations still existing in the 1950s were standardized by the International Telecommunication Union (ITU) as capital letters A through N. When color television was introduced, the chrominance information was added to the monochrome signals in a way that black and white televisions ignore. In this way backward compatibility was achieved. There are three standards for the way the additional color information can be encoded and transmitted. The first was the American NTSC system. The European and Australian PAL and the French and former Soviet Union SECAM standards were developed later and attempt to cure certain defects of the NTSC system. PAL's color encoding is similar to the NTSC systems. SECAM, though, uses a different modulation approach than PAL or NTSC. PAL had a late evolution called PALplus, allowing widescreen broadcasts while remaining fully compatible with existing PAL equipment. In principle, all three color encoding systems can be used with any scan line/frame rate combination. Therefore, in order to describe a given signal completely, it's necessary to quote the color system plus the broadcast standard as a capital letter. For example, the United States, Canada, Mexico and South Korea use NTSC-M, Japan uses NTSC-J, the UK uses PAL-I, France uses SECAM-L, much of Western Europe and Australia use PAL-B/G, most of Eastern Europe uses SECAM-D/K or PAL-D/K and so on. Not all of the possible combinations exist. NTSC is only used with system M, even though there were experiments with NTSC-A (405 line) in the UK and NTSC-N (625 line) in part of South America. PAL is used with a variety of 625-line standards (B, G, D, K, I, N) but also with the North American 525-line standard, accordingly named PAL-M. Likewise, SECAM is used with a variety of 625-line standards. For this reason, many people refer to any 625/25 type signal as PAL and to any 525/30 signal as NTSC, even when referring to digital signals; for example, on DVD-Video, which does not contain any analog color encoding, and thus no PAL or NTSC signals at all. Although a number of different broadcast television systems are in use worldwide, the same principles of operation apply. Displaying an image A cathode-ray tube (CRT) television displays an image by scanning a beam of electrons across the screen in a pattern of horizontal lines known as a raster. At the end of each line, the beam returns to the start of the next line; at the end of the last line, the beam returns to the beginning of the first line at the top of the screen. As it passes each point, the intensity of the beam is varied, varying the luminance of that point. A color television system is similar except there are three beams that scan together and an additional signal known as chrominance controls the color of the spot. When analog television was developed, no affordable technology for storing video signals existed; the luminance signal had to be generated and transmitted at the same time at which it is displayed on the CRT. It was therefore essential to keep the raster scanning in the camera (or other device for producing the signal) in exact synchronization with the scanning in the television. The physics of the CRT require that a finite time interval be allowed for the spot to move back to the start of the next line (horizontal retrace) or the start of the screen (vertical retrace). The timing of the luminance signal must allow for this. The human eye has a characteristic called phi phenomenon. Quickly displaying successive scan images creates the illusion of smooth motion. Flickering of the image can be partially solved using a long persistence phosphor coating on the CRT so that successive images fade slowly. However, slow phosphor has the negative side-effect of causing image smearing and blurring when there is rapid on-screen motion occurring. The maximum frame rate depends on the bandwidth of the electronics and the transmission system, and the number of horizontal scan lines in the image. A frame rate of 25 or 30 hertz is a satisfactory compromise, while the process of interlacing two video fields of the picture per frame is used to build the image. This process doubles the apparent number of video frames per second and further reduces flicker and other defects in transmission. Receiving signals The television system for each country will specify a number of television channels within the UHF or VHF frequency ranges. A channel actually consists of two signals: the picture information is transmitted using amplitude modulation on one carrier frequency, and the sound is transmitted with frequency modulation at a frequency at a fixed offset (typically 4.5 to 6 MHz) from the picture signal. The channel frequencies chosen represent a compromise between allowing enough bandwidth for video (and hence satisfactory picture resolution), and allowing enough channels to be packed into the available frequency band. In practice a technique called vestigial sideband is used to reduce the channel spacing, which would be nearly twice the video bandwidth if pure AM was used. Signal reception is invariably done via a superheterodyne receiver: the first stage is a tuner which selects a television channel and frequency-shifts it to a fixed intermediate frequency (IF). The signal amplifier performs amplification to the IF stages from the microvolt range to fractions of a volt. Extracting the sound At this point the IF signal consists of a video carrier signal at one frequency and the sound carrier at a fixed offset in frequency. A demodulator recovers the video signal. Also at the output of the same demodulator is a new frequency modulated sound carrier at the offset frequency. In some sets made before 1948, this was filtered out, and the sound IF of about 22 MHz was sent to an FM demodulator to recover the basic sound signal. In newer sets, this new carrier at the offset frequency was allowed to remain as intercarrier sound, and it was sent to an FM demodulator to recover the basic sound signal. One particular advantage of intercarrier sound is that when the front panel fine tuning knob is adjusted, the sound carrier frequency does not change with the tuning, but stays at the above-mentioned offset frequency. Consequently, it is easier to tune the picture without losing the sound. So the FM sound carrier is then demodulated, amplified, and used to drive a loudspeaker. Until the advent of the NICAM and MTS systems, television sound transmissions were monophonic. Structure of a video signal The video carrier is demodulated to give a composite video signal containing luminance, chrominance and synchronization signals. The result is identical to the composite video format used by analog video devices such as VCRs or CCTV cameras. To ensure good linearity and thus fidelity, consistent with affordable manufacturing costs of transmitters and receivers, the video carrier is never modulated to the extent that it is shut off altogether. When intercarrier sound was introduced later in 1948, not completely shutting off the carrier had the side effect of allowing intercarrier sound to be economically implemented. Each line of the displayed image is transmitted using a signal as shown above. The same basic format (with minor differences mainly related to timing and the encoding of color) is used for PAL, NTSC, and SECAM television systems. A monochrome signal is identical to a color one, with the exception that the elements shown in color in the diagram (the colorburst, and the chrominance signal) are not present. The front porch is a brief (about 1.5 microsecond) period inserted between the end of each transmitted line of picture and the leading edge of the next line's sync pulse. Its purpose was to allow voltage levels to stabilise in older televisions, preventing interference between picture lines. The front porch is the first component of the horizontal blanking interval which also contains the horizontal sync pulse and the back porch. The back porch is the portion of each scan line between the end (rising edge) of the horizontal sync pulse and the start of active video. It is used to restore the black level (300 mV) reference in analog video. In signal processing terms, it compensates for the fall time and settling time following the sync pulse. In color television systems such as PAL and NTSC, this period also includes the colorburst signal. In the SECAM system, it contains the reference subcarrier for each consecutive color difference signal in order to set the zero-color reference. In some professional systems, particularly satellite links between locations, the digital audio is embedded within the line sync pulses of the video signal, to save the cost of renting a second channel. The name for this proprietary system is Sound-in-Syncs. Monochrome video signal extraction The luminance component of a composite video signal varies between 0 V and approximately 0.7 V above the black level. In the NTSC system, there is a blanking signal level used during the front porch and back porch, and a black signal level 75 mV above it; in PAL and SECAM these are identical. In a monochrome receiver, the luminance signal is amplified to drive the control grid in the electron gun of the CRT. This changes the intensity of the electron beam and therefore the brightness of the spot being scanned. Brightness and contrast controls determine the DC shift and amplification, respectively. Color video signal extraction U and V signals A color signal conveys picture information for each of the red, green, and blue components of an image. However, these are not simply transmitted as three separate signals, because: such a signal would not be compatible with monochrome receivers, an important consideration when color broadcasting was first introduced. It would also occupy three times the bandwidth of existing television, requiring a decrease in the number of television channels available. Instead, the RGB signals are converted into YUV form, where the Y signal represents the luminance of the colors in the image. Because the rendering of colors in this way is the goal of both monochrome film and television systems, the Y signal is ideal for transmission as the luminance signal. This ensures a monochrome receiver will display a correct picture in black and white, where a given color is reproduced by a shade of gray that correctly reflects how light or dark the original color is. The U and V signals are color difference signals. The U signal is the difference between the B signal and the Y signal, also known as B minus Y (B-Y), and the V signal is the difference between the R signal and the Y signal, also known as R minus Y (R-Y). The U signal then represents how purplish-blue or its complementary color, yellowish-green, the color is, and the V signal how purplish-red or it's complementary, greenish-cyan, it is. The advantage of this scheme is that the U and V signals are zero when the picture has no color content. Since the human eye is more sensitive to detail in luminance than in color, the U and V signals can be transmitted with reduced bandwidth with acceptable results. In the receiver, a single demodulator can extract an additive combination of U plus V. An example is the X demodulator used in the X/Z demodulation system. In that same system, a second demodulator, the Z demodulator, also extracts an additive combination of U plus V, but in a different ratio. The X and Z color difference signals are further matrixed into three color difference signals, (R-Y), (B-Y), and (G-Y). The combinations of usually two, but sometimes three demodulators were: In the end, further matrixing of the above color-difference signals c through f yielded the three color-difference signals, (R-Y), (B-Y), and (G-Y). The R, G, and B signals in the receiver needed for the display device (CRT, Plasma display, or LCD display) are electronically derived by matrixing as follows: R is the additive combination of (R-Y) with Y, G is the additive combination of (G-Y) with Y, and B is the additive combination of (B-Y) with Y. All of this is accomplished electronically. It can be seen that in the combining process, the low-resolution portion of the Y signals cancel out, leaving R, G, and B signals able to render a low-resolution image in full color. However, the higher resolution portions of the Y signals do not cancel out, and so are equally present in R, G, and B, producing the higher-resolution image detail in monochrome, although it appears to the human eye as a full-color and full-resolution picture. NTSC and PAL systems In the NTSC and PAL color systems, U and V are transmitted by using quadrature amplitude modulation of a subcarrier. This kind of modulation applies two independent signals to one subcarrier, with the idea that both signals will be recovered independently at the receiving end. For NTSC, the subcarrier is at 3.58 MHz. For the PAL system it is at 4.43 MHz. The subcarrier itself is not included in the modulated signal (suppressed carrier), it is the subcarrier sidebands that carry the U and V information. The usual reason for using suppressed carrier is that it saves on transmitter power. In this application a more important advantage is that the color signal disappears entirely in black and white scenes. The subcarrier is within the bandwidth of the main luminance signal and consequently can cause undesirable artifacts on the picture, all the more noticeable in black and white receivers. A small sample of the subcarrier, the colorburst, is included in the horizontal blanking portion, which is not visible on the screen. This is necessary to give the receiver a phase reference for the modulated signal. Under quadrature amplitude modulation the modulated chrominance signal changes phase as compared to its subcarrier and also changes amplitude. The chrominance amplitude (when considered together with the Y signal) represents the approximate saturation of a color, and the chrominance phase against the subcarrier reference approximately represents the hue of the color. For particular test colors found in the test color bar pattern, exact amplitudes and phases are sometimes defined for test and troubleshooting purposes only. Due to the nature of the quadrature amplitude modulation process that created the chrominance signal, at certain times, the signal represents only the U signal, and 70 nanoseconds (NTSC) later, it represents only the V signal. About 70 nanoseconds later still, -U, and another 70 nanoseconds, -V. So to extract U, a synchronous demodulator is utilized, which uses the subcarrier to briefly gate the chroma every 280 nanoseconds, so that the output is only a train of discrete pulses, each having an amplitude that is the same as the original U signal at the corresponding time. In effect, these pulses are discrete-time analog samples of the U signal. The pulses are then low-pass filtered so that the original analog continuous-time U signal is recovered. For V, a 90-degree shifted subcarrier briefly gates the chroma signal every 280 nanoseconds, and the rest of the process is identical to that used for the U signal. Gating at any other time than those times mentioned above will yield an additive mixture of any two of U, V, -U, or -V. One of these off-axis (that is, of the U and V axis) gating methods is called I/Q demodulation. Another much more popular off-axis scheme was the X/Z demodulation system. Further matrixing recovered the original U and V signals. This scheme was actually the most popular demodulator scheme throughout the 1960s. The above process uses the subcarrier. But as previously mentioned, it was deleted before transmission, and only the chroma is transmitted. Therefore, the receiver must reconstitute the subcarrier. For this purpose, a short burst of the subcarrier, known as the colorburst, is transmitted during the back porch (re-trace blanking period) of each scan line. A subcarrier oscillator in the receiver locks onto this signal (see phase-locked loop) to achieve a phase reference, resulting in the oscillator producing the reconstituted subcarrier. NTSC uses this process unmodified. Unfortunately, this often results in poor color reproduction due to phase errors in the received signal, caused sometimes by multipath, but mostly by poor implementation at the studio end. With the advent of solid-state receivers, cable TV, and digital studio equipment for conversion to an over-the-air analog signal, these NTSC problems have been largely fixed, leaving operator error at the studio end as the sole color rendition weakness of the NTSC system. In any case, the PAL D (delay) system mostly corrects these kinds of errors by reversing the phase of the signal on each successive line, and averaging the results over pairs of lines. This process is achieved by the use of a 1H (where H = horizontal scan frequency) duration delay line. Phase shift errors between successive lines are therefore canceled out and the wanted signal amplitude is increased when the two in-phase (coincident) signals are re-combined. NTSC is more spectrum efficient than PAL, giving more picture detail for a given bandwidth. This is because sophisticated comb filters in receivers are more effective with NTSC's 4 color frame sequence compared to PAL's 8-field sequence. However, in the end, the larger channel width of most PAL systems in Europe still gives PAL systems the edge in transmitting more picture detail. SECAM system In the SECAM television system, U and V are transmitted on alternate lines, using simple frequency modulation of two different color subcarriers. In some analog color CRT displays, starting in 1956, the brightness control signal (luminance) is fed to the cathode connections of the electron guns, and the color difference signals (chrominance signals) are fed to the control grids connections. This simple CRT matrix mixing technique was replaced in later solid state designs of signal processing with the original matrixing method used in the 1954 and 1955 color TV receivers. Synchronization Synchronizing pulses added to the video signal at the end of every scan line and video frame ensure that the sweep oscillators in the receiver remain locked in step with the transmitted signal so that the image can be reconstructed on the receiver screen. A sync separator circuit detects the sync voltage levels and sorts the pulses into horizontal and vertical sync. Horizontal synchronization The horizontal sync pulse, separates the scan lines. The horizontal sync signal is a single short pulse that indicates the start of every line. The rest of the scan line follows, with the signal ranging from 0.3 V (black) to 1 V (white), until the next horizontal or vertical synchronization pulse. The format of the horizontal sync pulse varies. In the 525-line NTSC system it is a 4.85 μs pulse at 0 V. In the 625-line PAL system the pulse is 4.7 μs at 0 V. This is lower than the amplitude of any video signal (blacker than black) so it can be detected by the level-sensitive sync separator circuit of the receiver. Two-timing intervals are defined – the front porch between the end of the displayed video and the start of the sync pulse, and the back porch after the sync pulse and before the displayed video. These and the sync pulse itself are called the horizontal blanking (or retrace) interval and represent the time that the electron beam in the CRT is returning to the start of the next display line. Vertical synchronization Vertical synchronization separates the video fields. In PAL and NTSC, the vertical sync pulse occurs within the vertical blanking interval. The vertical sync pulses are made by prolonging the length of horizontal sync pulses through almost the entire length of the scan line. The vertical sync signal is a series of much longer pulses, indicating the start of a new field. The sync pulses occupy the whole line interval of a number of lines at the beginning and end of a scan; no picture information is transmitted during vertical retrace. The pulse sequence is designed to allow horizontal sync to continue during vertical retrace; it also indicates whether each field represents even or odd lines in interlaced systems (depending on whether it begins at the start of a horizontal line, or midway through). The format of such a signal in 525-line NTSC is: pre-equalizing pulses (6 to start scanning odd lines, 5 to start scanning even lines) long-sync pulses (5 pulses) post-equalizing pulses (5 to start scanning odd lines, 4 to start scanning even lines) Each pre- or post-equalizing pulse consists of half a scan line of black signal: 2 μs at 0 V, followed by 30 μs at 0.3 V. Each long sync pulse consists of an equalizing pulse with timings inverted: 30 μs at 0  V, followed by 2 μs at 0.3  V. In video production and computer graphics, changes to the image are often performed during the vertical blanking interval to avoid visible discontinuity of the image. If this image in the framebuffer is updated with a new image while the display is being refreshed, the display shows a mishmash of both frames, producing page tearing partway down the image. Horizontal and vertical hold The sweep (or deflection) oscillators were designed to run without a signal from the television station (or VCR, computer, or other composite video source). This allows the television receiver to display a raster and to allow an image to be presented during antenna placement. With sufficient signal strength, the receiver's sync separator circuit would split timebase pulses from the incoming video and use them to reset the horizontal and vertical oscillators at the appropriate time to synchronize with the signal from the station. The free-running oscillation of the horizontal circuit is especially critical, as the horizontal deflection circuits typically power the flyback transformer (which provides acceleration potential for the CRT) as well as the filaments for the high voltage rectifier tube and sometimes the filament(s) of the CRT itself. Without the operation of the horizontal oscillator and output stages in these television receivers, there would be no illumination of the CRT's face. The lack of precision timing components in early equipment meant that the timebase circuits occasionally needed manual adjustment. If their free-run frequencies were too far from the actual line and field rates, the circuits would not be able to follow the incoming sync signals. Loss of horizontal synchronization usually resulted in an unwatchable picture; loss of vertical synchronization would produce an image rolling up or down the screen. Older analog television receivers often provide manual controls to adjust horizontal and vertical timing. The adjustment takes the form of horizontal hold and vertical hold controls, usually on the front panel along with other common controls. These adjust the free-run frequencies of the corresponding timebase oscillators. A slowly rolling vertical picture demonstrates that the vertical oscillator is nearly synchronized with the television station but is not locking to it, often due to a weak signal or a failure in the sync separator stage not resetting the oscillator. Horizontal sync errors cause the image to be torn diagonally and repeated across the screen as if it were wrapped around a screw or a barber's pole; the greater the error, the more copies of the image will be seen at once wrapped around the barber pole. By the early 1980s the efficacy of the synchronization circuits, plus the inherent stability of the sets' oscillators, had been improved to the point where these controls were no longer necessary. Integrated Circuits which eliminated the horizontal hold control were starting to appear as early as 1969. The final generations of analog television receivers used IC-based designs where the receiver's timebases were derived from accurate crystal oscillators. With these sets, adjustment of the free-running frequency of either sweep oscillator was unnecessary and unavailable. Horizontal and vertical hold controls were rarely used in CRT-based computer monitors, as the quality and consistency of components were quite high by the advent of the computer age, but might be found on some composite monitors used with the 1970s–80s home or personal computers. Other technical information Components of a television system The tuner is the object which, with the aid of an antenna, isolates the television signals received over the air. There are two types of tuners in analog television, VHF and UHF tuners. The VHF tuner selects the VHF television frequency. This consists of a 4 MHz video bandwidth and a 2 MHz audio bandwidth. It then amplifies the signal and converts it to a 45.75 MHz Intermediate Frequency (IF) amplitude-modulated video and a 41.25 MHz IF frequency-modulated audio carrier. The IF amplifiers are centered at 44 MHz for optimal frequency transference of the audio and video carriers. Like radio, television has automatic gain control (AGC). This controls the gain of the IF amplifier stages and the tuner. The video amp and output amplifier is implemented using a pentode or a power transistor. The filter and demodulator separates the 45.75 MHz video from the 41.25 MHz audio then it simply uses a diode to detect the video signal. After the video detector, the video is amplified and sent to the sync separator and then to the picture tube. The audio signal goes to a 4.5 MHz amplifier. This amplifier prepares the signal for the 4.5Mhz detector. It then goes through a 4.5 MHz IF transformer to the detector. In television, there are 2 ways of detecting FM signals. One way is by the ratio detector. This is simple but very hard to align. The next is a relatively simple detector. This is the quadrature detector. It was invented in 1954. The first tube designed for this purpose was the 6BN6 type. It is easy to align and simple in circuitry. It was such a good design that it is still being used today in the Integrated circuit form. After the detector, it goes to the audio amplifier. The next part is the sync separator and clipper. From the detected video signal, this circuit extracts and conditions signals that the horizontal and vertical oscillators can use to keep in sync with the video. It also forms the AGC voltage, as previously stated. The horizontal and vertical oscillators form the raster on the CRT. They are driven by the sync separator. There are many ways to create these oscillators. The earliest is the thyratron oscillator. Although it is known to drift, it makes a perfect sawtooth wave. This sawtooth wave is so good that no linearity control is needed. This oscillator was designed for the electrostatic deflection CRTs but also found some use in electromagnetically deflected CRTs. The next oscillator developed was the blocking oscillator which uses a transformer to create a sawtooth wave. This was only used for a brief time period and never was very popular. Finally the multivibrator was probably the most successful. It needed more adjustment than the other oscillators, but it is very simple and effective. This oscillator was so popular that it was used from the early 1950s until today. Two oscillator amplifiers are neede. The vertical amplifier directly drives the yoke. Since it operates at 50 or 60 Hz and drives an electromagnet, it is similar to an audio amplifier. Because of the rapid deflection required, the horizontal oscillator requires a high-power flyback transformer driven by a high-powered tube or transistor. Additional windings on this flyback transformer typically power other parts of the system. Sync separator Image synchronization is achieved by transmitting negative-going pulses. The horizontal sync signal is a single short pulse that indicates the start of every line. Two-timing intervals are defined – the front porch between the end of the displayed video and the start of the sync pulse, and the back porch after the sync pulse and before the displayed video. These and the sync pulse itself are called the horizontal blanking (or retrace) interval and represent the time that the electron beam in the CRT is returning to the start of the next display line. The vertical sync signal is a series of much longer pulses, indicating the start of a new field. The vertical sync pulses occupy the whole of line interval of a number of lines at the beginning and end of a scan; no picture information is transmitted during vertical retrace. The pulse sequence is designed to allow horizontal sync to continue during vertical retrace. In the television receiver, a sync separator circuit detects the sync voltage levels and sorts the pulses into horizontal and vertical sync. Loss of horizontal synchronization usually resulted in an unwatchable picture; loss of vertical synchronization would produce an image rolling up or down the screen. Counting sync pulses, a video line selector picks a selected line from a TV signal, used for teletext, on-screen displays, station identification logos as well as in the industry when cameras were used as a sensor. Timebase circuits In an analog receiver with a CRT display sync pulses are fed to horizontal and vertical timebase circuits (commonly called "sweep circuits" in the United States), each consisting of an oscillator and an amplifier. These generate modified sawtooth and parabola current waveforms to scan the electron beam in a linear way. The waveform shapes are necessary to make up for the distance variations from the electron beam source and the screen surface. The oscillators are designed to free-run at frequencies very close to the field and line rates, but the sync pulses cause them to reset at the beginning of each scan line or field, resulting in the necessary synchronization of the beam sweep with the originating signal. The output waveforms from the timebase amplifiers are fed to the horizontal and vertical deflection coils wrapped around the CRT tube. These coils produce magnetic fields proportional to the changing current, and these deflect the electron beam across the screen. In the 1950s, the power for these circuits was derived directly from the mains supply. A simple circuit consisted of a series voltage dropper resistance and a rectifier valve (tube) or semiconductor diode. This avoided the cost of a large high voltage mains supply (50 or 60 Hz) transformer. This type of circuit was used for the thermionic valve (vacuum tube) technology. It was inefficient and produced a lot of heat which led to premature failures in the circuitry. Although failure was common, it was easily repairable. In the 1960s, semiconductor technology was introduced into timebase circuits. During the late 1960s in the UK, synchronous (with the scan line rate) power generation was introduced into solid state receiver designs. These had very complex circuits in which faults were difficult to trace, but had very efficient use of power. In the early 1970s AC mains (50 or 60 Hz), and line timebase (15,625 Hz), thyristor based switching circuits were introduced. In the UK use of the simple (50  Hz) types of power, circuits were discontinued. The reason for design changes arose from the electricity supply contamination problems arising from EMI, and supply loading issues due to energy being taken from only the positive half cycle of the mains supply waveform. CRT flyback power supply Most of the receiver's circuitry (at least in transistor- or IC-based designs) operates from a comparatively low-voltage DC power supply. However, the anode connection for a cathode-ray tube requires a very high voltage (typically 10–30 kV) for correct operation. This voltage is not directly produced by the main power supply circuitry; instead, the receiver makes use of the circuitry used for horizontal scanning. Direct current (DC), is switched through the line output transformer, and alternating current (AC) is induced into the scan coils. At the end of each horizontal scan line the magnetic field, which has built up in both transformer and scan coils by the current, is a source of latent electromagnetic energy. This stored collapsing magnetic field energy can be captured. The reverse flow, short duration, (about 10% of the line scan time) current from both the line output transformer and the horizontal scan coil is discharged again into the primary winding of the flyback transformer by the use of a rectifier which blocks this negative reverse emf. A small value capacitor is connected across the scan switching device. This tunes the circuit inductances to resonate at a much higher frequency. This slows down (lengthens) the flyback time from the extremely rapid decay rate that would result if they were electrically isolated during this short period. One of the secondary windings on the flyback transformer then feeds this brief high-voltage pulse to a Cockcroft–Walton generator design voltage multiplier. This produces the required EHT supply. A flyback converter is a power supply circuit operating on similar principles. A typical modern design incorporates the flyback transformer and rectifier circuitry into a single unit with a captive output lead, (known as a diode split line output transformer or an Integrated High Voltage Transformer (IHVT)), so that all high-voltage parts are enclosed. Earlier designs used a separate line output transformer and a well-insulated high voltage multiplier unit. The high frequency (15 kHz or so) of the horizontal scanning allows reasonably small components to be used. Transition to digital In many countries, over-the-air broadcast television of analog audio and analog video signals has been discontinued, to allow the re-use of the television broadcast radio spectrum for other services such as datacasting and subchannels. The first country to make a wholesale switch to digital over-the-air (terrestrial television) broadcasting was Luxembourg in 2006, followed later in 2006 by the Netherlands; in 2007 by Finland, Andorra, Sweden and Switzerland; in 2008 by Belgium (Flanders) and Germany; in 2009 by the United States (high power stations), southern Canada, the Isle of Man, Norway, and Denmark. In 2010, Belgium (Wallonia), Spain, Wales, Latvia, Estonia, the Channel Islands, San Marino, Croatia, and Slovenia; in 2011 Israel, Austria, Monaco, Cyprus, Japan (excluding Miyagi, Iwate, and Fukushima prefectures), Malta and France; in 2012 the Czech Republic, Arab World, Taiwan, Portugal, Japan (including Miyagi, Iwate, and Fukushima prefectures), Serbia, Italy, Canada, Mauritius, the United Kingdom, the Republic of Ireland, Lithuania, Slovakia, Gibraltar, and South Korea; in 2013, the Republic of Macedonia, Poland, Bulgaria, Hungary, Australia, and New Zealand, completed the transition. The United Kingdom made the transition to digital television between 2008 and 2012, with the exception of Whitehaven, which made the switch over in 2007. The first digital TV-only area in the United Kingdom was Ferryside in Carmarthenshire. The Digital television transition in the United States for high-powered transmission was completed on 12 June 2009, the date that the Federal Communications Commission (FCC) set. Almost two million households could no longer watch television because they had not prepared for the transition. The switchover had been delayed by the DTV Delay Act. While the majority of the viewers of over-the-air broadcast television in the U.S. watch full-power stations (which number about 1800), there are three other categories of television stations in the U.S.: low-power broadcasting stations, class A stations, and television translator stations. They were given later deadlines. In broadcasting, the United States influences southern Canada and northern Mexico because those areas are covered by television stations in the U.S. In Japan, the switch to digital began in northeastern Ishikawa Prefecture on 24 July 2010 and ended in 43 of the country's 47 prefectures (including the rest of Ishikawa) on 24 July 2011, but in Fukushima, Iwate, and Miyagi prefectures, the conversion was delayed to 31 March 2012, due to complications from the 2011 Tōhoku earthquake and tsunami and its related nuclear accidents. In Canada, most of the larger cities turned off analog broadcasts on 31 August 2011. China had scheduled to end analog broadcasting between 2015 and 2018. Brazil switched to digital television on 2 December 2007 in its major cities. It is estimated that Brazil will end analog broadcasting in 2023. In Malaysia, the Malaysian Communications & Multimedia Commission (MCMC) advertised for tender bids to be submitted in the third quarter of 2009 for the 470 through 742  MHz UHF allocation, to enable Malaysia's broadcast system to move into DTV. The new broadcast band allocation would result in Malaysia's having to build an infrastructure for all broadcasters, using a single digital terrestrial transmission/television broadcast (DTTB) channel. Large portions of Malaysia are covered by television broadcasts from Singapore, Thailand, Brunei, and Indonesia (from Borneo and Batam). Starting from 1 November 2019, all regions in Malaysia were no longer using the analog system after the states of Sabah and Sarawak finally turned it off on 31 October 2019. In Singapore, digital television under DVB-T2 began on 16 December 2013. The switchover was delayed many times until analog TV was switched off at midnight on 2 January 2019. In the Philippines, the National Telecommunications Commission required all broadcasting companies to end analog broadcasting on 31 December 2015 at 11:59 p.m. Due to delay of the release of the implementing rules and regulations for digital television broadcast, the target date was moved to 2020. Full digital broadcast is expected in 2021 and all of the analog TV services should be shut down by the end of 2023. In the Russian Federation, the Russian Television and Radio Broadcasting Network (RTRS) disabled analog broadcasting of federal channels in five stages, shutting down broadcasting in multiple federal subjects at each stage. The first region to have analog broadcasting disabled was Tver Oblast on 3 December 2018, and the switchover was completed on 14 October 2019. During the transition, DVB-T2 receivers and monetary compensations for purchasing of terrestrial or satellite digital TV reception equipment were provided to disabled people, World War II veterans, certain categories of retirees and households with income per member below living wage. See also Amateur television Narrow-bandwidth television Overscan Slow-scan television Terrestrial television Television transmitter Vertical blanking interval Field (video) Video frame Glossary of video terms Notes References External links Video signal measurement and generation Television synchronisation Video broadcast standard frequencies and country listings EDN magazine describing design of a 1958 transistorised television receiver Designing the color television signal in the early 1950s as described by two engineers working directly with the NTSC Television technology Television terminology
2396
https://en.wikipedia.org/wiki/Adhesive
Adhesive
Adhesive, also known as glue, cement, mucilage, or paste, is any non-metallic substance applied to one or both surfaces of two separate items that binds them together and resists their separation. The use of adhesives offers certain advantages over other binding techniques such as sewing, mechanical fastenings, or welding. These include the ability to bind different materials together, the more efficient distribution of stress across a joint, the cost-effectiveness of an easily mechanized process, and greater flexibility in design. Disadvantages of adhesive use include decreased stability at high temperatures, relative weakness in bonding large objects with a small bonding surface area, and greater difficulty in separating objects during testing. Adhesives are typically organized by the method of adhesion followed by reactive or non-reactive, a term which refers to whether the adhesive chemically reacts in order to harden. Alternatively, they can be organized either by their starting physical phase or whether their raw stock is of natural or synthetic origin. Adhesives may be found naturally or produced synthetically. The earliest human use of adhesive-like substances was approximately 200,000 years ago, when Neanderthals produced tar from the dry distillation of birch bark for use in binding stone tools to wooden handles. The first references to adhesives in literature appeared in approximately 2000 BC. The Greeks and Romans made great contributions to the development of adhesives. In Europe, glue was not widely used until the period AD 1500–1700. From then until the 1900s increases in adhesive use and discovery were relatively gradual. Only since the 20th century has the development of synthetic adhesives accelerated rapidly, and innovation in the field continues to the present. History Evidence of the earliest known use of adhesives was discovered in central Italy when two stone flakes partially covered with birch-bark tar and a third uncovered stone from the Middle Pleistocene era (circa 200,000 years ago) were found. This is thought to be the oldest discovered human use of tar-hafted stones. The birch-bark-tar adhesive is a simple, one-component adhesive. A study from 2019 showed that birch tar production can be a very simple process—merely involving the burning of birch bark near smooth vertical surfaces in open air conditions. Although sticky enough, plant-based adhesives are brittle and vulnerable to environmental conditions. The first use of compound adhesives was discovered in Sibudu, South Africa. Here, 70,000-year-old stone segments that were once inserted in axe hafts were discovered covered with an adhesive composed of plant gum and red ochre (natural iron oxide) as adding ochre to plant gum produces a stronger product and protects the gum from disintegrating under wet conditions. The ability to produce stronger adhesives allowed middle Stone Age humans to attach stone segments to sticks in greater variations, which led to the development of new tools. More recent examples of adhesive use by prehistoric humans have been found at the burial sites of ancient tribes. Archaeologists studying the sites found that approximately 6,000 years ago the tribesmen had buried their dead together with food found in broken clay pots repaired with tree resins. Another investigation by archaeologists uncovered the use of bituminous cements to fasten ivory eyeballs to statues in Babylonian temples dating to approximately 4000 BC. In 2000, a paper revealed the discovery of a 5,200-year-old man nicknamed the "Tyrolean Iceman" or "Ötzi", who was preserved in a glacier near the Austria-Italy border. Several of his belongings were found with him including two arrows with flint arrowheads and a copper hatchet, each with evidence of organic glue used to connect the stone or metal parts to the wooden shafts. The glue was analyzed as pitch, which requires the heating of tar during its production. The retrieval of this tar requires a transformation of birch bark by means of heat, in a process known as pyrolysis. The first references to adhesives in literature appeared in approximately 2000 BC. Further historical records of adhesive use are found from the period spanning 1500–1000 BC. Artifacts from this period include paintings depicting wood gluing operations and a casket made of wood and glue in King Tutankhamun's tomb. Other ancient Egyptian artifacts employ animal glue for bonding or lamination. Such lamination of wood for bows and furniture is thought to have extended their life and was accomplished using casein (milk protein)-based glues. The ancient Egyptians also developed starch-based pastes for the bonding of papyrus to clothing and a plaster of Paris-like material made of calcined gypsum. From AD 1 to 500 the Greeks and Romans made great contributions to the development of adhesives. Wood veneering and marquetry were developed, the production of animal and fish glues refined, and other materials utilized. Egg-based pastes were used to bond gold leaves, and incorporated various natural ingredients such as blood, bone, hide, milk, cheese, vegetables, and grains. The Greeks began the use of slaked lime as mortar while the Romans furthered mortar development by mixing lime with volcanic ash and sand. This material, known as pozzolanic cement, was used in the construction of the Roman Colosseum and Pantheon. The Romans were also the first people known to have used tar and beeswax as caulk and sealant between the wooden planks of their boats and ships. In Central Asia, the rise of the Mongols in approximately AD 1000 can be partially attributed to the good range and power of the bows of Genghis Khan's hordes. These bows were made of a bamboo core, with horn on the belly (facing towards the archer) and sinew on the back, bound together with animal glue. In Europe, glue fell into disuse until the period AD 1500–1700. At this time, world-renowned cabinet and furniture makers such as Thomas Chippendale and Duncan Phyfe began to use adhesives to hold their products together. In 1690, the first commercial glue plant was established in The Netherlands. This plant produced glues from animal hides. In 1750, the first British glue patent was issued for fish glue. The following decades of the next century witnessed the manufacture of casein glues in German and Swiss factories. In 1876, the first U.S. patent (number 183,024) was issued to the Ross brothers for the production of casein glue. The first U.S. postage stamps used starch-based adhesives when issued in 1847. The first US patent (number 61,991) on dextrin (a starch derivative) adhesive was issued in 1867. Natural rubber was first used as material for adhesives starting in 1830, which marked the starting point of the modern adhesive. In 1862, a British patent (number 3288) was issued for the plating of metal with brass by electrodeposition to obtain a stronger bond to rubber. The development of the automobile and the need for rubber shock mounts required stronger and more durable bonds of rubber and metal. This spurred the development of cyclized rubber treated in strong acids. By 1927, this process was used to produce solvent-based thermoplastic rubber cements for metal to rubber bonding. Natural rubber-based sticky adhesives were first used on a backing by Henry Day (US Patent 3,965) in 1845. Later these kinds of adhesives were used in cloth backed surgical and electric tapes. By 1925, the pressure-sensitive tape industry was born. Today, sticky notes, Scotch Tape, and other tapes are examples of pressure-sensitive adhesives (PSA). A key step in the development of synthetic plastics was the introduction of a thermoset plastic known as Bakelite phenolic in 1910. Within two years, phenolic resin was applied to plywood as a coating varnish. In the early 1930s, phenolics gained importance as adhesive resins. The 1920s, 1930s, and 1940s witnessed great advances in the development and production of new plastics and resins due to the First and Second World Wars. These advances greatly improved the development of adhesives by allowing the use of newly developed materials that exhibited a variety of properties. With changing needs and ever evolving technology, the development of new synthetic adhesives continues to the present. However, due to their low cost, natural adhesives are still more commonly used. Types Adhesives are typically organized by the method of adhesion. These are then organized into reactive and non-reactive adhesives, which refers to whether the adhesive chemically reacts in order to harden. Alternatively they can be organized by whether the raw stock is of natural, or synthetic origin, or by their starting physical phase. By reactiveness Non-reactive Drying There are two types of adhesives that harden by drying: solvent-based adhesives and polymer dispersion adhesives, also known as emulsion adhesives. Solvent-based adhesives are a mixture of ingredients (typically polymers) dissolved in a solvent. White glue, contact adhesives and rubber cements are members of the drying adhesive family. As the solvent evaporates, the adhesive hardens. Depending on the chemical composition of the adhesive, they will adhere to different materials to greater or lesser degrees. Polymer dispersion adhesives are milky-white dispersions often based on polyvinyl acetate (PVAc). They are used extensively in the woodworking and packaging industries. They are also used with fabrics and fabric-based components, and in engineered products such as loudspeaker cones. Pressure-sensitive Pressure-sensitive adhesives (PSA) form a bond by the application of light pressure to marry the adhesive with the adherend. They are designed to have a balance between flow and resistance to flow. The bond forms because the adhesive is soft enough to flow (i.e., "wet") to the adherend. The bond has strength because the adhesive is hard enough to resist flow when stress is applied to the bond. Once the adhesive and the adherend are in close proximity, molecular interactions, such as van der Waals forces, become involved in the bond, contributing significantly to its ultimate strength. PSAs are designed for either permanent or removable applications. Examples of permanent applications include safety labels for power equipment, foil tape for HVAC duct work, automotive interior trim assembly, and sound/vibration damping films. Some high performance permanent PSAs exhibit high adhesion values and can support kilograms of weight per square centimeter of contact area, even at elevated temperatures. Permanent PSAs may initially be removable (for example to recover mislabeled goods) and build adhesion to a permanent bond after several hours or days. Removable adhesives are designed to form a temporary bond, and ideally can be removed after months or years without leaving residue on the adherend. Removable adhesives are used in applications such as surface protection films, masking tapes, bookmark and note papers, barcode labels, price marking labels, promotional graphics materials, and for skin contact (wound care dressings, EKG electrodes, athletic tape, analgesic and transdermal drug patches, etc.). Some removable adhesives are designed to repeatedly stick and unstick. They have low adhesion, and generally cannot support much weight. Pressure-sensitive adhesive is used in Post-it notes. Pressure-sensitive adhesives are manufactured with either a liquid carrier or in 100% solid form. Articles are made from liquid PSAs by coating the adhesive and drying off the solvent or water carrier. They may be further heated to initiate a cross-linking reaction and increase molecular weight. 100% solid PSAs may be low viscosity polymers that are coated and then reacted with radiation to increase molecular weight and form the adhesive, or they may be high viscosity materials that are heated to reduce viscosity enough to allow coating, and then cooled to their final form. Major raw material for PSA's are acrylate-based polymers. Contact Contact adhesives are used in strong bonds with high shear-resistance like laminates, such as bonding Formica to a wooden counter, and in footwear, as in attaching outsoles to uppers. Natural rubber and polychloroprene (Neoprene) are commonly used contact adhesives. Both of these elastomers undergo strain crystallization. Contact adhesives must be applied to both surfaces and allowed some time to dry before the two surfaces are pushed together. Some contact adhesives require as long as 24 hours to dry before the surfaces are to be held together. Once the surfaces are pushed together, the bond forms very quickly. It is usually not necessary to apply pressure for a long time, so there is less need for clamps. Hot Hot adhesives, also known as hot melt adhesives, are thermoplastics applied in molten form (in the 65–180 °C range) which solidify on cooling to form strong bonds between a wide range of materials. Ethylene-vinyl acetate-based hot-melts are particularly popular for crafts because of their ease of use and the wide range of common materials they can join. A glue gun (shown at right) is one method of applying hot adhesives. The glue gun melts the solid adhesive, then allows the liquid to pass through its barrel onto the material, where it solidifies. Thermoplastic glue may have been invented around 1940 by Procter & Gamble as a solution to the problem that water-based adhesives, commonly used in packaging at that time, failed in humid climates, causing packages to open. However, water-based adhesives are still of strong interest as they typically do not contain volatile solvents. Reactive Anaerobic Anaerobic adhesives cure when in contact with metal, in the absence of oxygen. They work well in a close-fitting space, as when used as a Thread-locking fluid. Multi-part Multi-component adhesives harden by mixing two or more components which chemically react. This reaction causes polymers to cross-link into acrylates, urethanes, and epoxies . There are several commercial combinations of multi-component adhesives in use in industry. Some of these combinations are: Polyester resin & polyurethane resin Polyols & polyurethane resin Acrylic polymers & polyurethane resins The individual components of a multi-component adhesive are not adhesive by nature. The individual components react with each other after being mixed and show full adhesion only on curing. The multi-component resins can be either solvent-based or solvent-less. The solvents present in the adhesives are a medium for the polyester or the polyurethane resin. The solvent is dried during the curing process. Pre-mixed and frozen adhesives Pre-mixed and frozen adhesives (PMFs) are adhesives that are mixed, deaerated, packaged, and frozen. As it is necessary for PMFs to remain frozen before use, once they are frozen at −80 °C they are shipped with dry ice and are required to be stored at or below −40 °C. PMF adhesives eliminate mixing mistakes by the end user and reduce exposure of curing agents that can contain irritants or toxins. PMFs were introduced commercially in the 1960s and are commonly used in aerospace and defense. One-part One-part adhesives harden via a chemical reaction with an external energy source, such as radiation, heat, and moisture. Ultraviolet (UV) light curing adhesives, also known as light curing materials (LCM), have become popular within the manufacturing sector due to their rapid curing time and strong bond strength. Light curing adhesives can cure in as little as one second and many formulations can bond dissimilar substrates (materials) and withstand harsh temperatures. These qualities make UV curing adhesives essential to the manufacturing of items in many industrial markets such as electronics, telecommunications, medical, aerospace, glass, and optical. Unlike traditional adhesives, UV light curing adhesives not only bond materials together but they can also be used to seal and coat products. They are generally acrylic-based. Heat curing adhesives consist of a pre-made mixture of two or more components. When heat is applied the components react and cross-link. This type of adhesive includes thermoset epoxies, urethanes, and polyimides. Moisture curing adhesives cure when they react with moisture present on the substrate surface or in the air. This type of adhesive includes cyanoacrylates and urethanes. By origin Natural Natural adhesives are made from organic sources such as vegetable starch (dextrin), natural resins, or animals (e.g. the milk protein casein and hide-based animal glues). These are often referred to as bioadhesives. One example is a simple paste made by cooking flour in water. Starch-based adhesives are used in corrugated board and paper sack production, paper tube winding, and wallpaper adhesives. Casein glue is mainly used to adhere glass bottle labels. Animal glues have traditionally been used in bookbinding, wood joining, and many other areas but now are largely replaced by synthetic glues except in specialist applications like the production and repair of stringed instruments. Albumen made from the protein component of blood has been used in the plywood industry. Masonite, a wood hardboard, was originally bonded using natural wood lignin, an organic polymer, though most modern particle boards such as MDF use synthetic thermosetting resins. Synthetic Synthetic adhesives are made out of organic compounds. Many are based on elastomers, thermoplastics, emulsions, and thermosets. Examples of thermosetting adhesives are: epoxy, polyurethane, cyanoacrylate and acrylic polymers. The first commercially produced synthetic adhesive was Karlsons Klister in the 1920s. Application Applicators of different adhesives are designed according to the adhesive being used and the size of the area to which the adhesive will be applied. The adhesive is applied to either one or both of the materials being bonded. The pieces are aligned and pressure is added to aid in adhesion and rid the bond of air bubbles. Common ways of applying an adhesive include brushes, rollers, using films or pellets, spray guns and applicator guns (e.g., caulk gun). All of these can be used manually or automated as part of a machine. Mechanisms of adhesion For an adhesive to be effective it must have three main properties. Firstly, it must be able to wet the base material. Wetting is the ability of a liquid to maintain contact with a solid surface. It must also increase in strength after application, and finally it must be able to transmit load between the two surfaces/substrates being adhered. Adhesion, the attachment between adhesive and substrate may occur either by mechanical means, in which the adhesive works its way into small pores of the substrate, or by one of several chemical mechanisms. The strength of adhesion depends on many factors, including the means by which it occurs. In some cases, an actual chemical bond occurs between adhesive and substrate. In others, electrostatic forces, as in static electricity, hold the substances together. A third mechanism involves the van der Waals forces that develop between molecules. A fourth means involves the moisture-aided diffusion of the glue into the substrate, followed by hardening. Methods to improve adhesion The quality of adhesive bonding depends strongly on the ability of the adhesive to efficiently cover (wet) the substrate area. This happens when the surface energy of the substrate is greater than the surface energy of the adhesive. However, high-strength adhesives have high surface energy. Thus, they bond poorly to low-surface-energy polymers or other materials. To solve this problem, surface treatment can be used to increase the surface energy as a preparation step before adhesive bonding. Importantly, surface preparation provides a reproducible surface allowing consistent bonding results. The commonly used surface activation techniques include plasma activation, flame treatment and wet chemistry priming. Failure There are several factors that could contribute to the failure of two adhered surfaces. Sunlight and heat may weaken the adhesive. Solvents can deteriorate or dissolve adhesive. Physical stresses may also cause the separation of surfaces. When subjected to loading, debonding may occur at different locations in the adhesive joint. The major fracture types are the following: Cohesive fracture Cohesive fracture is obtained if a crack propagates in the bulk polymer which constitutes the adhesive. In this case the surfaces of both adherends after debonding will be covered by fractured adhesive. The crack may propagate in the center of the layer or near an interface. For this last case, the cohesive fracture can be said to be "cohesive near the interface". Adhesive fracture Adhesive fracture (sometimes referred to as interfacial fracture) is when debonding occurs between the adhesive and the adherend. In most cases, the occurrence of adhesive fracture for a given adhesive goes along with smaller fracture toughness. Other types of fracture Other types of fracture include: The mixed type, which occurs if the crack propagates at some spots in a cohesive and in others in an interfacial manner. Mixed fracture surfaces can be characterised by a certain percentage of adhesive and cohesive areas. The alternating crack path type which occurs if the cracks jump from one interface to the other. This type of fracture appears in the presence of tensile pre-stresses in the adhesive layer. Fracture can also occur in the adherend if the adhesive is tougher than the adherend. In this case, the adhesive remains intact and is still bonded to one substrate and remnants of the other. For example, when one removes a price label, the adhesive usually remains on the label and the surface. This is cohesive failure. If, however, a layer of paper remains stuck to the surface, the adhesive has not failed. Another example is when someone tries to pull apart Oreo cookies and all the filling remains on one side; this is an adhesive failure, rather than a cohesive failure. Design of adhesive joints As a general design rule, the material properties of the object need to be greater than the forces anticipated during its use. (i.e. geometry, loads, etc.). The engineering work will consist of having a good model to evaluate the function. For most adhesive joints, this can be achieved using fracture mechanics. Concepts such as the stress concentration factor and the strain energy release rate can be used to predict failure. In such models, the behavior of the adhesive layer itself is neglected and only the adherents are considered. Failure will also very much depend on the opening mode of the joint. Mode I is an opening or tensile mode where the loadings are normal to the crack. Mode II is a sliding or in-plane shear mode where the crack surfaces slide over one another in direction perpendicular to the leading edge of the crack. This is typically the mode for which the adhesive exhibits the highest resistance to fracture. Mode III is a tearing or antiplane shear mode. As the loads are usually fixed, an acceptable design will result from combination of a material selection procedure and geometry modifications, if possible. In adhesively bonded structures, the global geometry and loads are fixed by structural considerations and the design procedure focuses on the material properties of the adhesive and on local changes on the geometry. Increasing the joint resistance is usually obtained by designing its geometry so that: The bonded zone is large It is mainly loaded in mode II Stable crack propagation will follow the appearance of a local failure. Shelf life Some glues and adhesives have a limited shelf life. Shelf life is dependent on multiple factors, the foremost of which being temperature. Adhesives may lose their effectiveness at high temperatures, as well as become increasingly stiff. Other factors affecting shelf life include exposure to oxygen or water vapor. See also Impact glue References Bibliography Kinloch, Anthony J. (1987). Adhesion and Adhesives: Science and Technology. London: Chapman and Hall. External links Educational portal on adhesives and sealants RoyMech: The theory of adhesive bonding 3M's Adhesive & Tapes Classification Database of adhesives for attaching different materials Visual arts materials 1750 introductions Packaging materials
2400
https://en.wikipedia.org/wiki/AMD
AMD
Advanced Micro Devices, Inc., commonly abbreviated as AMD, is an American multinational semiconductor company based in Santa Clara, California, that develops computer processors and related technologies for business and consumer markets. The company was founded in 1969 by Jerry Sanders and a group of other technology professionals. AMD's early products were primarily memory chips and other components for computers. The company later expanded into the microprocessor market, competing with Intel, its main rival in the industry. In the early 2000s, AMD experienced significant growth and success, thanks in part to its strong position in the PC market and the success of its Athlon and Opteron processors. However, the company faced challenges in the late 2000s and early 2010s, as it struggled to keep up with Intel in the race to produce faster and more powerful processors. In the late 2010s, AMD regained some of its market share thanks to the success of its Ryzen processors which are now widely regarded as superior to Intel products in business applications including cloud applications. AMD's processors are used in a wide range of computing devices, including personal computers, servers, laptops, and gaming consoles. While it initially manufactured its own processors, the company later outsourced its manufacturing, a practice known as going fabless, after GlobalFoundries was spun off in 2009. AMD's main products include microprocessors, motherboard chipsets, embedded processors, graphics processors, and FPGAs for servers, workstations, personal computers, and embedded system applications. The company has also expanded into new markets, such as the data center and gaming markets, and has announced plans to enter the high-performance computing market. History First twelve years Advanced Micro Devices was formally incorporated by Jerry Sanders, along with seven of his colleagues from Fairchild Semiconductor, on May 1, 1969. Sanders, an electrical engineer who was the director of marketing at Fairchild, had, like many Fairchild executives, grown frustrated with the increasing lack of support, opportunity, and flexibility within the company. He later decided to leave to start his own semiconductor company, following the footsteps of Robert Noyce (developer of the first silicon integrated circuit at Fairchild in 1959) and Gordon Moore, who together founded the semiconductor company Intel in July 1968. In September 1969, AMD moved from its temporary location in Santa Clara to Sunnyvale, California. To immediately secure a customer base, AMD initially became a second source supplier of microchips designed by Fairchild and National Semiconductor. AMD first focused on producing logic chips. The company guaranteed quality control to United States Military Standard, an advantage in the early computer industry since unreliability in microchips was a distinct problem that customers – including computer manufacturers, the telecommunications industry, and instrument manufacturers – wanted to avoid. In November 1969, the company manufactured its first product: the Am9300, a 4-bit MSI shift register, which began selling in 1970. Also in 1970, AMD produced its first proprietary product, the Am2501 logic counter, which was highly successful. Its bestselling product in 1971 was the Am2505, the fastest multiplier available. In 1971, AMD entered the RAM chip market, beginning with the Am3101, a 64-bit bipolar RAM. That year AMD also greatly increased the sales volume of its linear integrated circuits, and by year-end the company's total annual sales reached US$4.6 million. AMD went public in September 1972. The company was a second source for Intel MOS/LSI circuits by 1973, with products such as Am14/1506 and Am14/1507, dual 100-bit dynamic shift registers. By 1975, AMD was producing 212 products – of which 49 were proprietary, including the Am9102 (a static N-channel 1024-bit RAM) and three low-power Schottky MSI circuits: Am25LS07, Am25LS08, and Am25LS09. Intel had created the first microprocessor, its 4-bit 4004, in 1971. By 1975, AMD entered the microprocessor market with the Am9080, a reverse-engineered clone of the Intel 8080, and the Am2900 bit-slice microprocessor family. When Intel began installing microcode in its microprocessors in 1976, it entered into a cross-licensing agreement with AMD, which was granted a copyright license to the microcode in its microprocessors and peripherals, effective October 1976. In 1977, AMD entered into a joint venture with Siemens, a German engineering conglomerate wishing to enhance its technology expertise and enter the American market. Siemens purchased 20% of AMD's stock, giving the company an infusion of cash to increase its product lines. The two companies also jointly established Advanced Micro Computers (AMC), located in Silicon Valley and in Germany, allowing AMD to enter the microcomputer development and manufacturing field, in particular based on AMD's second-source Zilog Z8000 microprocessors. When the two companies' vision for Advanced Micro Computers diverged, AMD bought out Siemens' stake in the American division in 1979. AMD closed Advanced Micro Computers in late 1981 after switching focus to manufacturing second-source Intel x86 microprocessors. Total sales in fiscal year 1978 topped $100 million, and in 1979, AMD debuted on the New York Stock Exchange. In 1979, production also began on AMD's new semiconductor fabrication plant in Austin, Texas; the company already had overseas assembly facilities in Penang and Manila, and began construction on a fabrication plant in San Antonio in 1981. In 1980, AMD began supplying semiconductor products for telecommunications, an industry undergoing rapid expansion and innovation. Technology exchange agreement with Intel Intel had introduced the first x86 microprocessors in 1978. In 1981, IBM created its PC, and wanted Intel's x86 processors, but only under the condition that Intel also provide a second-source manufacturer for its patented x86 microprocessors. Intel and AMD entered into a 10-year technology exchange agreement, first signed in October 1981 and formally executed in February 1982. The terms of the agreement were that each company could acquire the right to become a second-source manufacturer of semiconductor products developed by the other; that is, each party could "earn" the right to manufacture and sell a product developed by the other, if agreed to, by exchanging the manufacturing rights to a product of equivalent technical complexity. The technical information and licenses needed to make and sell a part would be exchanged for a royalty to the developing company. The 1982 agreement also extended the 1976 AMD–Intel cross-licensing agreement through 1995. The agreement included the right to invoke arbitration of disagreements, and after five years the right of either party to end the agreement with one year's notice. The main result of the 1982 agreement was that AMD became a second-source manufacturer of Intel's x86 microprocessors and related chips, and Intel provided AMD with database tapes for its 8086, 80186, and 80286 chips. However, in the event of a bankruptcy or takeover of AMD, the cross-licensing agreement would be effectively canceled. Beginning in 1982, AMD began volume-producing second-source Intel-licensed 8086, 8088, 80186, and 80188 processors, and by 1984, its own Am286 clone of Intel's 80286 processor, for the rapidly growing market of IBM PCs and IBM clones. It also continued its successful concentration on proprietary bipolar chips. The company continued to spend greatly on research and development, and created the world's first 512K EPROM in 1984. That year, AMD was listed in the book The 100 Best Companies to Work for in America, and later made the Fortune 500 list for the first time in 1985. By mid-1985, the microchip market experienced a severe downturn, mainly due to long-term aggressive trade practices (dumping) from Japan, but also due to a crowded and non-innovative chip market in the United States. AMD rode out the mid-1980s crisis by aggressively innovating and modernizing, devising the Liberty Chip program of designing and manufacturing one new chip or chipset per week for 52 weeks in fiscal year 1986, and by heavily lobbying the U.S. government until sanctions and restrictions were put in place to prevent predatory Japanese pricing. During this time, AMD withdrew from the DRAM market, and made some headway into the CMOS market, which it had lagged in entering, having focused instead on bipolar chips. AMD had some success in the mid-1980s with the AMD7910 and AMD7911 "World Chip" FSK modem, one of the first multi-standard devices that covered both Bell and CCITT tones at up to 1200 baud half duplex or 300/300 full duplex. Beginning in 1986, AMD embraced the perceived shift toward RISC with their own AMD Am29000 (29k) processor; the 29k survived as an embedded processor. The company also increased its EPROM memory market share in the late 1980s. Throughout the 1980s, AMD was a second-source supplier of Intel x86 processors. In 1991, it introduced its own 386-compatible Am386, an AMD-designed chip. Creating its own chips, AMD began to compete directly with Intel. AMD had a large, successful flash memory business, even during the dotcom bust. In 2003, to divest some manufacturing and aid its overall cash flow, which was under duress from aggressive microprocessor competition from Intel, AMD spun off its flash memory business and manufacturing into Spansion, a joint venture with Fujitsu, which had been co-manufacturing flash memory with AMD since 1993. In December 2005, AMD divested itself of Spansion to focus on the microprocessor market, and Spansion went public in an IPO. Acquisition of ATI, spin-off of GlobalFoundries, and acquisition of Xilinx On July 24, 2006, AMD announced its acquisition of the Canadian 3D graphics card company ATI Technologies. AMD paid $4.3 billion and 58 million shares of its capital stock, for a total of approximately $5.4 billion. The transaction was completed on October 25, 2006. On August 30, 2010, AMD announced that it would retire the ATI brand name for its graphics chipsets in favor of the AMD brand name. In October 2008, AMD announced plans to spin off manufacturing operations in the form of GlobalFoundries Inc., a multibillion-dollar joint venture with Advanced Technology Investment Co., an investment company formed by the government of Abu Dhabi. The partnership and spin-off gave AMD an infusion of cash and allowed it to focus solely on chip design. To assure the Abu Dhabi investors of the new venture's success, AMD's CEO Hector Ruiz stepped down in July 2008, while remaining executive chairman, in preparation for becoming chairman of GlobalFoundries in March 2009. President and COO Dirk Meyer became AMD's CEO. Recessionary losses necessitated AMD cutting 1,100 jobs in 2009. In August 2011, AMD announced that former Lenovo executive Rory Read would be joining the company as CEO, replacing Meyer. In November 2011, AMD announced plans to lay off more than 10% (1,400) of its employees from across all divisions worldwide. In October 2012, it announced plans to lay off an additional 15% of its workforce to reduce costs in the face of declining sales revenue. AMD acquired the low-power server manufacturer SeaMicro in early 2012, with an eye to bringing out an Arm64 server chip. On October 8, 2014, AMD announced that Rory Read had stepped down after three years as president and chief executive officer. He was succeeded by Lisa Su, a key lieutenant who had been serving as chief operating officer since June. On October 16, 2014, AMD announced a new restructuring plan along with its Q3 results. Effective July 1, 2014, AMD reorganized into two business groups: Computing and Graphics, which primarily includes desktop and notebook processors and chipsets, discrete GPUs, and professional graphics; and Enterprise, Embedded, and Semi-Custom, which primarily includes server and embedded processors, dense servers, semi-custom SoC products (including solutions for gaming consoles), engineering services, and royalties. As part of this restructuring, AMD announced that 7% of its global workforce would be laid off by the end of 2014. After the GlobalFoundries spin-off and subsequent layoffs, AMD was left with significant vacant space at 1 AMD Place, its aging Sunnyvale headquarters office complex. In August 2016, AMD's 47 years in Sunnyvale came to a close when it signed a lease with the Irvine Company for a new 220,000 sq. ft. headquarters building in Santa Clara. AMD's new location at Santa Clara Square faces the headquarters of archrival Intel across the Bayshore Freeway and San Tomas Aquino Creek. Around the same time, AMD also agreed to sell 1 AMD Place to the Irvine Company. In April 2019, the Irvine Company secured approval from the Sunnyvale City Council of its plans to demolish 1 AMD Place and redevelop the entire 32-acre site into townhomes and apartments. In October 2020, AMD announced that it was acquiring Xilinx in an all-stock transaction. The acquisition was completed in February 2022, with an estimated acquisition price of $50 billion. In October 2023, AMD acquired an open-source AI software provider, Nod.ai, to bolster its AI software ecosystem. List of CEOs Products CPUs and APUs IBM PC and the x86 architecture In February 1982, AMD signed a contract with Intel, becoming a licensed second-source manufacturer of 8086 and 8088 processors. IBM wanted to use the Intel 8088 in its IBM PC, but its policy at the time was to require at least two sources for its chips. AMD later produced the Am286 under the same arrangement. In 1984, Intel internally decided to no longer cooperate with AMD in supplying product information to shore up its advantage in the marketplace, and delayed and eventually refused to convey the technical details of the Intel 80386. In 1987, AMD invoked arbitration over the issue, and Intel reacted by canceling the 1982 technological-exchange agreement altogether. After three years of testimony, AMD eventually won in arbitration in 1992, but Intel disputed this decision. Another long legal dispute followed, ending in 1994 when the Supreme Court of California sided with the arbitrator and AMD. In 1990, Intel countersued AMD, renegotiating AMD's right to use derivatives of Intel's microcode for its cloned processors. In the face of uncertainty during the legal dispute, AMD was forced to develop clean room designed versions of Intel code for its x386 and x486 processors, the former long after Intel had released its own x386 in 1985. In March 1991, AMD released the Am386, its clone of the Intel 386 processor. By October of the same year it had sold one million units. In 1993, AMD introduced the first of the Am486 family of processors, which proved popular with a large number of original equipment manufacturers, including Compaq, which signed an exclusive agreement using the Am486. The Am5x86, another Am486-based processor, was released in November 1995, and continued AMD's success as a fast, cost-effective processor. Finally, in an agreement effective 1996, AMD received the rights to the microcode in Intel's x386 and x486 processor families, but not the rights to the microcode in the following generations of processors. K5, K6, Athlon, Duron, and Sempron AMD's first in-house x86 processor was the K5, launched in 1996. The "K" in its name was a reference to Kryptonite, the only substance known to harm comic book character Superman. This itself was a reference to Intel's hegemony over the market, i.e., an anthropomorphization of them as Superman. The number "5" was a reference to the fifth generation of x86 processors; rival Intel had previously introduced its line of fifth-generation x86 processors as Pentium because the U.S. Trademark and Patent Office had ruled that mere numbers could not be trademarked. In 1996, AMD purchased NexGen, specifically for the rights to their Nx series of x86-compatible processors. AMD gave the NexGen design team their own building, left them alone, and gave them time and money to rework the Nx686. The result was the K6 processor, introduced in 1997. Although it was based on Socket 7, variants such as K6-III/450 were faster than Intel's Pentium II (sixth-generation processor). The K7 was AMD's seventh-generation x86 processor, making its debut under the brand name Athlon on June 23, 1999. Unlike previous AMD processors, it could not be used on the same motherboards as Intel's, due to licensing issues surrounding Intel's Slot 1 connector, and instead used a Slot A connector, referenced to the Alpha processor bus. The Duron was a lower-cost and limited version of the Athlon (64KB instead of 256KB L2 cache) in a 462-pin socketed PGA (socket A) or soldered directly onto the motherboard. Sempron was released as a lower-cost Athlon XP, replacing Duron in the socket A PGA era. It has since been migrated upward to all new sockets, up to AM3. On October 9, 2001, the Athlon XP was released. On February 10, 2003, the Athlon XP with 512KB L2 Cache was released. Athlon 64, Opteron and Phenom The K8 was a major revision of the K7 architecture, with the most notable features being the addition of a 64-bit extension to the x86 instruction set (called x86-64, AMD64, or x64), the incorporation of an on-chip memory controller, and the implementation of an extremely high-performance point-to-point interconnect called HyperTransport, as part of the Direct Connect Architecture. The technology was initially launched as the Opteron server-oriented processor on April 22, 2003. Shortly thereafter, it was incorporated into a product for desktop PCs, branded Athlon 64. On April 21, 2005, AMD released the first dual-core Opteron, an x86-based server CPU. A month later, it released the Athlon 64 X2, the first desktop-based dual-core processor family. In May 2007, AMD abandoned the string "64" in its dual-core desktop product branding, becoming Athlon X2, downplaying the significance of 64-bit computing in its processors. Further updates involved improvements to the microarchitecture, and a shift of the target market from mainstream desktop systems to value dual-core desktop systems. In 2008, AMD started to release dual-core Sempron processors exclusively in China, branded as the Sempron 2000 series, with lower HyperTransport speed and smaller L2 cache. AMD completed its dual-core product portfolio for each market segment. In September 2007, AMD released the first server Opteron K10 processors, followed in November by the Phenom processor for desktop. K10 processors came in dual-core, triple-core, and quad-core versions, with all cores on a single die. AMD released a new platform codenamed "Spider", which used the new Phenom processor, as well as an R770 GPU and a 790 GX/FX chipset from the AMD 700 chipset series. However, AMD built the Spider at 65nm, which was uncompetitive with Intel's smaller and more power-efficient 45nm. In January 2009, AMD released a new processor line dubbed Phenom II, a refresh of the original Phenom built using the 45 nm process. AMD's new platform, codenamed "Dragon", used the new Phenom II processor, and an ATI R770 GPU from the R700 GPU family, as well as a 790 GX/FX chipset from the AMD 700 chipset series. The Phenom II came in dual-core, triple-core and quad-core variants, all using the same die, with cores disabled for the triple-core and dual-core versions. The Phenom II resolved issues that the original Phenom had, including a low clock speed, a small L3 cache, and a Cool'n'Quiet bug that decreased performance. The Phenom II cost less but was not performance-competitive with Intel's mid-to-high-range Core 2 Quads. The Phenom II also enhanced its predecessor's memory controller, allowing it to use DDR3 in a new native socket AM3, while maintaining backward compatibility with AM2+, the socket used for the Phenom, and allowing the use of the DDR2 memory that was used with the platform. In April 2010, AMD released a new Phenom II Hexa-core (6-core) processor codenamed "Thuban". This was a totally new die based on the hexa-core "Istanbul" Opteron processor. It included AMD's "turbo core" technology, which allows the processor to automatically switch from 6 cores to 3 faster cores when more pure speed is needed. The Magny Cours and Lisbon server parts were released in 2010. The Magny Cours part came in 8 to 12 cores and the Lisbon part in 4 and 6 core parts. Magny Cours is focused on performance while the Lisbon part is focused on high performance per watt. Magny Cours is an MCM (multi-chip module) with two hexa-core "Istanbul" Opteron parts. This will use a new socket G34 for dual and quad-socket processors and thus will be marketed as Opteron 61xx series processors. Lisbon uses socket C32 certified for dual-socket use or single socket use only and thus will be marketed as Opteron 41xx processors. Both will be built on a 45 nm SOI process. Fusion becomes the AMD APU Following AMD's 2006 acquisition of Canadian graphics company ATI Technologies, an initiative codenamed Fusion was announced to integrate a CPU and GPU together on some of AMD's microprocessors, including a built in PCI Express link to accommodate separate PCI Express peripherals, eliminating the northbridge chip from the motherboard. The initiative intended to move some of the processing originally done on the CPU (e.g. floating-point unit operations) to the GPU, which is better optimized for some calculations. The Fusion was later renamed the AMD APU (Accelerated Processing Unit). Llano was AMD's first APU built for laptops. Llano was the second APU released, targeted at the mainstream market. It incorporated a CPU and GPU on the same die, as well as northbridge functions, and used "Socket FM1" with DDR3 memory. The CPU part of the processor was based on the Phenom II "Deneb" processor. AMD suffered an unexpected decrease in revenue based on production problems for the Llano. More AMD APUs for laptops running Windows 7 and Windows 8 OS are being used commonly. These include AMD's price-point APUs, the E1 and E2, and their mainstream competitors with Intel's Core i-series: The Vision A- series, the A standing for accelerated. These range from the lower-performance A4 chipset to the A6, A8, and A10. These all incorporate next-generation Radeon graphics cards, with the A4 utilizing the base Radeon HD chip and the rest using a Radeon R4 graphics card, with the exception of the highest-model A10 (A10-7300) which uses an R6 graphics card. New microarchitectures High-power, high-performance Bulldozer cores Bulldozer was AMD's microarchitecture codename for server and desktop AMD FX processors, first released on October 12, 2011. This family 15h microarchitecture is the successor to the family 10h (K10) microarchitecture design. Bulldozer was a clean-sheet design, not a development of earlier processors. The core was specifically aimed at 10–125 W TDP computing products. AMD claimed dramatic performance-per-watt efficiency improvements in high-performance computing (HPC) applications with Bulldozer cores. While hopes were high that Bulldozer would bring AMD to be performance-competitive with Intel once more, most benchmarks were disappointing. In some cases the new Bulldozer products were slower than the K10 models they were built to replace. The Piledriver microarchitecture was the 2012 successor to Bulldozer, increasing clock speeds and performance relative to its predecessor. Piledriver would be released in AMD FX, APU, and Opteron product lines. Piledriver was subsequently followed by the Steamroller microarchitecture in 2013. Used exclusively in AMD's APUs, Steamroller focused on greater parallelism. In 2015, the Excavator microarchitecture replaced Piledriver. Expected to be the last microarchitecture of the Bulldozer series, Excavator focused on improved power efficiency. Low-power Cat cores The Bobcat microarchitecture was revealed during a speech from AMD executive vice-president Henri Richard in Computex 2007 and was put into production during the first quarter of 2011. Based on the difficulty competing in the x86 market with a single core optimized for the 10–100 W range, AMD had developed a simpler core with a target range of 1–10 watts. In addition, it was believed that the core could migrate into the hand-held space if the power consumption can be reduced to less than 1 W. Jaguar is a microarchitecture codename for Bobcat's successor, released in 2013, that is used in various APUs from AMD aimed at the low-power/low-cost market. Jaguar and its derivates would go on to be used in the custom APUs of the PlayStation 4, Xbox One, PlayStation 4 Pro, Xbox One S, and Xbox One X. Jaguar would be later followed by the Puma microarchitecture in 2014. ARM architecture-based designs In 2012, AMD announced it was working on ARM products, both as a semi-custom product and server product. The initial server product was announced as the Opteron A1100 in 2014, an 8-core Cortex-A57 based ARMv8-A SoC, and was expected to be followed by an APU incorporating a Graphics Core Next GPU. However, the Opteron A1100 was not released until 2016, with the delay attributed to adding software support. The A1100 was also criticized for not having support from major vendors upon its release. In 2014, AMD also announced the K12 custom core for release in 2016. While being ARMv8-A instruction set architecture compliant, the K12 was expected to be entirely custom-designed, targeting the server, embedded, and semi-custom markets. While ARM architecture development continued, products based on K12 were subsequently delayed with no release planned. Development of AMD's x86-based Zen microarchitecture was preferred. Zen-based CPUs and APUs Zen is a new architecture for x86-64 based Ryzen series of CPUs and APUs, introduced in 2017 by AMD and built from the ground up by a team led by Jim Keller, beginning with his arrival in 2012, and taping out before his departure in September 2015. One of AMD's primary goals with Zen was an IPC increase of at least 40%, however in February 2017 AMD announced that they had actually achieved a 52% increase. Processors made on the Zen architecture are built on the 14 nm FinFET node and have a renewed focus on single-core performance and HSA compatibility. Previous processors from AMD were either built in the 32 nm process ("Bulldozer" and "Piledriver" CPUs) or the 28 nm process ("Steamroller" and "Excavator" APUs). Because of this, Zen is much more energy efficient. The Zen architecture is the first to encompass CPUs and APUs from AMD built for a single socket (Socket AM4). Also new for this architecture is the implementation of simultaneous multithreading (SMT) technology, something Intel has had for years on some of their processors with their proprietary hyper-threading implementation of SMT. This is a departure from the "Clustered MultiThreading" design introduced with the Bulldozer architecture. Zen also has support for DDR4 memory. AMD released the Zen-based high-end Ryzen 7 "Summit Ridge" series CPUs on March 2, 2017, mid-range Ryzen 5 series CPUs on April 11, 2017, and entry level Ryzen 3 series CPUs on July 27, 2017. AMD later released the Epyc line of Zen derived server processors for 1P and 2P systems. In October 2017, AMD released Zen-based APUs as Ryzen Mobile, incorporating Vega graphics cores. In January 2018 AMD has announced their new lineup plans, with Ryzen 2. AMD launched CPUs with the 12nm Zen+ microarchitecture in April 2018, following up with the 7nm Zen 2 microarchitecture in June 2019, including an update to the Epyc line with new processors using the Zen 2 microarchitecture in August 2019, and Zen 3 slated for release in Q3 2020. As of 2019, AMD's Ryzen processors were reported to outsell Intel's consumer desktop processors. At CES 2020 AMD announced their Ryzen Mobile 4000, as the first 7 nm x86 mobile processor, the first 7 nm 8-core (also 16-thread) high-performance mobile processor, and the first 8-core (also 16-thread) processor for ultrathin laptops. This generation is still based on the Zen 2 architecture. In October 2020, AMD announced new processors based on the Zen 3 architecture. On PassMark's Single thread performance test the Ryzen 5 5600x bested all other CPUs besides the Ryzen 9 5950X. In August 2022, AMD announced their initial lineup of CPUs based on the new Zen 4 architecture. The Steam Deck, PlayStation 5, Xbox Series X and Series S all use chips based on the Zen 2 microarchitecture, with proprietary tweaks and different configurations in each system's implementation than AMD sells in its own commercially available APUs. Graphics products and GPUs ATI prior to AMD acquisition Radeon within AMD In 2008, the ATI division of AMD released the TeraScale microarchitecture implementing a unified shader model. This design replaced the previous fixed-function hardware of previous graphics cards with multipurpose, programmable shaders. Initially released as part of the GPU for the Xbox 360, this technology would go on to be used in Radeon branded HD 2000 parts. Three generations of TeraScale would be designed and used in parts from 2008 to 2014. Combined GPU and CPU divisions In a 2009 restructuring, AMD merged the CPU and GPU divisions to support the company's APUs, which fused both graphics and general purpose processing. In 2011, AMD released the successor to TeraScale, Graphics Core Next (GCN). This new microarchitecture emphasized GPGPU compute capability in addition to graphics processing, with a particular aim of supporting heterogeneous computing on AMD's APUs. GCN's reduced instruction set ISA allowed for significantly increased compute capability over TeraScale's very long instruction word ISA. Since GCN's introduction with the HD 7970, five generations of the GCN architecture have been produced from 2008 through at least 2017. Radeon Technologies Group In September 2015, AMD separated the graphics technology division of the company into an independent internal unit called the Radeon Technologies Group (RTG) headed by Raja Koduri. This gave the graphics division of AMD autonomy in product design and marketing. The RTG then went on to create and release the Polaris and Vega microarchitectures released in 2016 and 2017, respectively. In particular the Vega, or fifth generation GCN, microarchitecture includes a number of major revisions to improve performance and compute capabilities. In November 2017, Raja Koduri left RTG and CEO and President Lisa Su took his position. In January 2018, it was reported that two industry veterans joined RTG, namely Mike Rayfield as senior vice president and general manager of RTG, and David Wang as senior vice president of engineering for RTG. In January 2020, AMD announced that its second generation RDNA graphics architecture was in development, with the aim of competing with the Nvidia RTX graphics products for performance leadership. In October 2020, AMD announced their new RX 6000 series series GPUs, their first high-end product based on RDNA2 and capable of handling ray-tracing natively, aiming to challenge Nvidia's RTX 3000 GPUs. Semi-custom and game console products In 2012, AMD's then CEO Rory Read began a program to offer semi-custom designs. Rather than AMD simply designing and offering a single product, potential customers could work with AMD to design a custom chip based on AMD's intellectual property. Customers pay a non-recurring engineering fee for design and development, and a purchase price for the resulting semi-custom products. In particular, AMD noted their unique position of offering both x86 and graphics intellectual property. These semi-custom designs would have design wins as the APUs in the PlayStation 4 and Xbox One and the subsequent PlayStation 4 Pro, Xbox One S, Xbox One X, Xbox Series X/S, and PlayStation 5. Financially, these semi-custom products would represent a majority of the company's revenue in 2016. In November 2017, AMD and Intel announced that Intel would market a product combining in a single package an Intel Core CPU, a semi-custom AMD Radeon GPU, and HBM2 memory. Other hardware AMD motherboard chipsets Before the launch of Athlon 64 processors in 2003, AMD designed chipsets for their processors spanning the K6 and K7 processor generations. The chipsets include the AMD-640, AMD-751, and the AMD-761 chipsets. The situation changed in 2003 with the release of Athlon 64 processors, and AMD chose not to further design its own chipsets for its desktop processors while opening the desktop platform to allow other firms to design chipsets. This was the "Open Platform Management Architecture" with ATI, VIA and SiS developing their own chipset for Athlon 64 processors and later Athlon 64 X2 and Athlon 64 FX processors, including the Quad FX platform chipset from Nvidia. The initiative went further with the release of Opteron server processors as AMD stopped the design of server chipsets in 2004 after releasing the AMD-8111 chipset, and again opened the server platform for firms to develop chipsets for Opteron processors. As of today, Nvidia and Broadcom are the sole designing firms of server chipsets for Opteron processors. As the company completed the acquisition of ATI Technologies in 2006, the firm gained the ATI design team for chipsets which previously designed the Radeon Xpress 200 and the Radeon Xpress 3200 chipsets. AMD then renamed the chipsets for AMD processors under AMD branding (for instance, the CrossFire Xpress 3200 chipset was renamed as AMD 580X CrossFire chipset). In February 2007, AMD announced the first AMD-branded chipset since 2004 with the release of the AMD 690G chipset (previously under the development codename RS690), targeted at mainstream IGP computing. It was the industry's first to implement a HDMI 1.2 port on motherboards, shipping for more than a million units. While ATI had aimed at releasing an Intel IGP chipset, the plan was scrapped and the inventories of Radeon Xpress 1250 (codenamed RS600, sold under ATI brand) was sold to two OEMs, Abit and ASRock. Although AMD stated the firm would still produce Intel chipsets, Intel had not granted the license of FSB to ATI. On November 15, 2007, AMD announced a new chipset series portfolio, the AMD 7-Series chipsets, covering from the enthusiast multi-graphics segment to the value IGP segment, to replace the AMD 480/570/580 chipsets and AMD 690 series chipsets, marking AMD's first enthusiast multi-graphics chipset. Discrete graphics chipsets were launched on November 15, 2007, as part of the codenamed Spider desktop platform, and IGP chipsets were launched at a later time in spring 2008 as part of the codenamed Cartwheel platform. AMD returned to the server chipsets market with the AMD 800S series server chipsets. It includes support for up to six SATA 6.0 Gbit/s ports, the C6 power state, which is featured in Fusion processors and AHCI 1.2 with SATA FIS–based switching support. This is a chipset family supporting Phenom processors and Quad FX enthusiast platform (890FX), IGP (890GX). With the advent of AMD's APUs in 2011, traditional northbridge features such as the connection to graphics and the PCI Express controller were incorporated into the APU die. Accordingly, APUs were connected to a single chip chipset, renamed the Fusion Controller Hub (FCH), which primarily provided southbridge functionality. AMD released new chipsets in 2017 to support the release of their new Ryzen products. As the Zen microarchitecture already includes much of the northbridge connectivity, the AM4-based chipsets primarily varied in the number of additional PCI Express lanes, USB connections, and SATA connections available. These AM4 chipsets were designed in conjunction with ASMedia. Embedded products Embedded CPUs In the early 1990s, AMD began marketing a series of embedded System-on-a-chip (SoC) called AMD Élan, starting with the SC300 and SC310. Both combines a 32-Bit, Am386SX, low-voltage 25 MHz or 33 MHz CPU with memory controller, PC/AT peripheral controllers, real-time clock, PLL clock generators and ISA bus interface. The SC300 integrates in addition two PC card slots and a CGA-compatible LCD controller. They were followed in 1996 by the SC4xx types. Now supporting VESA Local Bus and using the Am486 with up to 100 MHz clock speed. A SC450 with 33 MHze.g. was used in the Nokia 9000 Communicator. In 1999 the SC520 was announced. Using an Am586 with 100 MHz or 133 MHz and supporting SDRAM and PCI it was the latest member of the series. In February 2002, AMD acquired Alchemy Semiconductor for its Alchemy line of MIPS processors for the hand-held and portable media player markets. On June 13, 2006, AMD officially announced that the line was to be transferred to Raza Microelectronics, Inc., a designer of MIPS processors for embedded applications. In August 2003, AMD also purchased the Geode business which was originally the Cyrix MediaGX from National Semiconductor to augment its existing line of embedded x86 processor products. During the second quarter of 2004, it launched new low-power Geode NX processors based on the K7 Thoroughbred architecture with speeds of fanless processors and , and processor with fan, of TDP 25 W. This technology is used in a variety of embedded systems (Casino slot machines and customer kiosks for instance), several UMPC designs in Asia markets, as well as the OLPC XO-1 computer, an inexpensive laptop computer intended to be distributed to children in developing countries around the world. The Geode LX processor was announced in 2005 and is said will continue to be available through 2015. AMD has also introduced 64-bit processors into its embedded product line starting with the AMD Opteron processor. Leveraging the high throughput enabled through HyperTransport and the Direct Connect Architecture these server-class processors have been targeted at high-end telecom and storage applications. In 2007, AMD added the AMD Athlon, AMD Turion, and Mobile AMD Sempron processors to its embedded product line. Leveraging the same 64-bit instruction set and Direct Connect Architecture as the AMD Opteron but at lower power levels, these processors were well suited to a variety of traditional embedded applications. Throughout 2007 and into 2008, AMD has continued to add both single-core Mobile AMD Sempron and AMD Athlon processors and dual-core AMD Athlon X2 and AMD Turion processors to its embedded product line and now offers embedded 64-bit solutions starting with 8 W TDP Mobile AMD Sempron and AMD Athlon processors for fan-less designs up to multi-processor systems leveraging multi-core AMD Opteron processors all supporting longer than standard availability. The ATI acquisition in 2006 included the Imageon and Xilleon product lines. In late 2008, the entire handheld division was sold off to Qualcomm, who have since produced the Adreno series. Also in 2008, the Xilleon division was sold to Broadcom. In April 2007, AMD announced the release of the M690T integrated graphics chipset for embedded designs. This enabled AMD to offer complete processor and chipset solutions targeted at embedded applications requiring high-performance 3D and video such as emerging digital signage, kiosk, and Point of Sale applications. The M690T was followed by the M690E specifically for embedded applications which removed the TV output, which required Macrovision licensing for OEMs, and enabled native support for dual TMDS outputs, enabling dual independent DVI interfaces. In January 2011, AMD announced the AMD Embedded G-Series Accelerated Processing Unit. This was the first APU for embedded applications. These were followed by updates in 2013 and 2016. In May 2012, AMD Announced the AMD Embedded R-Series Accelerated Processing Unit. This family of products incorporates the Bulldozer CPU architecture, and Discrete-class Radeon HD 7000G Series graphics. This was followed by a system on a chip (SoC) version in 2015 which offered a faster CPU and faster graphics, with support for DDR4 SDRAM memory. Embedded graphics AMD builds graphic processors for use in embedded systems. They can be found in anything from casinos to healthcare, with a large portion of products being used in industrial machines. These products include a complete graphics processing device in a compact multi-chip module including RAM and the GPU. ATI began offering embedded GPUs with the E2400 in 2008. Since that time AMD has released regular updates to their embedded GPU lineup in 2009, 2011, 2015, and 2016; reflecting improvements in their GPU technology. Current product lines CPU and APU products AMD's portfolio of CPUs and APUs Athlon – brand of entry level CPUs (Excavator) and APUs (Ryzen) A-series – Excavator-class consumer desktop and laptop APUs G-series – Excavator- and Jaguar-class low-power embedded APUs Ryzen – brand of consumer CPUs and APUs Ryzen Threadripper – brand of prosumer/professional CPUs R-series – Excavator class high-performance embedded APUs Epyc – brand of server CPUs Opteron – brand of microserver APUs Graphics products AMD's portfolio of dedicated graphics processors Radeon – brand for consumer line of graphics cards; the brand name originated with ATI. Mobility Radeon offers power-optimized versions of Radeon graphics chips for use in laptops. Radeon Pro – Workstation graphics card brand. Successor to the FirePro brand. Radeon Instinct – brand of server and workstation targeted machine learning and GPGPU products Radeon-branded products RAM In 2011, AMD began selling Radeon branded DDR3 SDRAM to support the higher bandwidth needs of AMD's APUs. While the RAM is sold by AMD, it was manufactured by Patriot Memory and VisionTek. This was later followed by higher speeds of gaming oriented DDR3 memory in 2013. Radeon branded DDR4 SDRAM memory was released in 2015, despite no AMD CPUs or APUs supporting DDR4 at the time. AMD noted in 2017 that these products are "mostly distributed in Eastern Europe" and that it continues to be active in the business. Solid-state drives AMD announced in 2014 it would sell Radeon branded solid-state drives manufactured by OCZ with capacities up to 480 GB and using the SATA interface. Technologies CPU hardware technologies found in AMD CPU/APU and other products include: HyperTransport – a high-bandwidth, low-latency system bus used in AMD's CPU and APU products Infinity Fabric – a derivative of HyperTransport used as the communication bus in AMD's Zen microarchitecture Graphics hardware technologies found in AMD GPU products include: AMD Eyefinity – facilitates multi-monitor setup of up to 6 monitors per graphics card AMD FreeSync – display synchronization based on the VESA Adaptive Sync standard AMD TrueAudio – acceleration of audio calculations AMD XConnect – allows the use of External GPU enclosures through Thunderbolt 3 AMD CrossFire – multi-GPU technology allowing the simultaneous use of multiple GPUs Unified Video Decoder (UVD) – acceleration of video decompression (decoding) Video Coding Engine (VCE) – acceleration of video compression (encoding) Software AMD has made considerable efforts towards opening its software tools above the firmware level in the past decade. For the following mentions, software not expressely stated free can be assumed to be proprietary. Distribution AMD Radeon Software is the default channel for official software distribution from AMD. It includes both free and proprietary software components, and supports both Microsoft Windows and Linux. Software by type CPU AOCC is AMD's optimizing proprietary C/C++ compiler based on LLVM and available for Linux. AMDuProf is AMD's CPU performance and Power profiling tool suite, available for Linux and Windows. AMD has also taken an active part in developing coreboot, an open-source project aimed at replacing the proprietary BIOS firmware. This cooperation ceased in 2013, but AMD has indicated recently that it is considering releasing source code so that Ryzen can be compatible with coreboot in the future. GPU Most notable public AMD software is on the GPU side. AMD has opened both its graphic and compute stacks: GPUOpen is AMD's graphics stack, which includes for example FidelityFX Super Resolution. ROCm (Radeon Open Compute platform) is AMD's compute stack for machine learning and high-performance computing, based on the LLVM compiler technologies. Under the ROCm project, AMDgpu is AMD's open source device driver supporting the GCN and following architectures, available for Linux. This latter driver component is used both by the graphics and compute stacks. Misc AMD conducts open research on heterogeneous computing. Other AMD software includes the AMD Core Math Library, and open-source software including the AMD Performance Library. AMD contributes to open source projects, including working with Sun Microsystems to enhance OpenSolaris and Sun xVM on the AMD platform. AMD also maintains its own Open64 compiler distribution and contributes its changes back to the community. In 2008, AMD released the low-level programming specifications for its GPUs, and works with the X.Org Foundation to develop drivers for AMD graphics cards. Extensions for software parallelism (xSP), aimed at speeding up programs to enable multi-threaded and multi-core processing, announced in Technology Analyst Day 2007. One of the initiatives being discussed since August 2007 is the Light Weight Profiling (LWP), providing internal hardware monitor with runtimes, to observe information about executing process and help the re-design of software to be optimized with multi-core and even multi-threaded programs. Another one is the extension of Streaming SIMD Extension (SSE) instruction set, the SSE5. Codenamed SIMFIRE – interoperability testing tool for the Desktop and mobile Architecture for System Hardware (DASH) open architecture. Production and fabrication Previously, AMD produced its chips at company-owned semiconductor foundries. AMD pursued a strategy of collaboration with other semiconductor manufacturers IBM and Motorola to co-develop production technologies. AMD's founder Jerry Sanders termed this the "Virtual Gorilla" strategy to compete with Intel's significantly greater investments in fabrication. In 2008, AMD spun off its chip foundries into an independent company named GlobalFoundries. This breakup of the company was attributed to the increasing costs of each process node. The Emirate of Abu Dhabi purchased the newly created company through its subsidiary Advanced Technology Investment Company (ATIC), purchasing the final stake from AMD in 2009. With the spin-off of its foundries, AMD became a fabless semiconductor manufacturer, designing products to be produced at for-hire foundries. Part of the GlobalFoundries spin-off included an agreement with AMD to produce some number of products at GlobalFoundries. Both prior to the spin-off and after AMD has pursued production with other foundries including TSMC and Samsung. It has been argued that this would reduce risk for AMD by decreasing dependence on any one foundry which has caused issues in the past. In 2018, AMD started shifting the production of their CPUs and GPUs to TSMC, following GlobalFoundries' announcement that they were halting development of their 7 nm process. AMD revised their wafer purchase requirement with GlobalFoundries in 2019, allowing AMD to freely choose foundries for 7 nm nodes and below, while maintaining purchase agreements for 12 nm and above through 2021. Corporate affairs Partnerships AMD uses strategic industry partnerships to further its business interests as well as to rival Intel's dominance and resources: A partnership between AMD and Alpha Processor Inc. developed HyperTransport, a point-to-point interconnect standard which was turned over to an industry standards body for finalization. It is now used in modern motherboards that are compatible with AMD processors. AMD also formed a strategic partnership with IBM, under which AMD gained silicon on insulator (SOI) manufacturing technology, and detailed advice on 90 nm implementation. AMD announced that the partnership would extend to 2011 for 32 nm and 22 nm fabrication-related technologies. To facilitate processor distribution and sales, AMD is loosely partnered with end-user companies, such as HP, Dell, Asus, Acer, and Microsoft. In 1993, AMD established a 50–50 partnership with Fujitsu called FASL, and merged into a new company called FASL LLC in 2003. The joint venture went public under the name Spansion and ticker symbol SPSN in December 2005, with AMD shares dropping 37%. AMD no longer directly participates in the Flash memory devices market now as AMD entered into a non-competition agreement on December 21, 2005, with Fujitsu and Spansion, pursuant to which it agreed not to directly or indirectly engage in a business that manufactures or supplies standalone semiconductor devices (including single-chip, multiple-chip or system devices) containing only Flash memory. On May 18, 2006, Dell announced that it would roll out new servers based on AMD's Opteron chips by year's end, thus ending an exclusive relationship with Intel. In September 2006, Dell began offering AMD Athlon X2 chips in their desktop lineup. In June 2011, HP announced new business and consumer notebooks equipped with the latest versions of AMD APUsaccelerated processing units. AMD will power HP's Intel-based business notebooks as well. In the spring of 2013, AMD announced that it would be powering all three major next-generation consoles. The Xbox One and Sony PlayStation 4 are both powered by a custom-built AMD APU, and the Nintendo Wii U is powered by an AMD GPU. According to AMD, having their processors in all three of these consoles will greatly assist developers with cross-platform development to competing consoles and PCs as well as increased support for their products across the board. AMD has entered into an agreement with Hindustan Semiconductor Manufacturing Corporation (HSMC) for the production of AMD products in India. AMD is a founding member of the HSA Foundation which aims to ease the use of a Heterogeneous System Architecture. A Heterogeneous System Architecture is intended to use both central processing units and graphics processors to complete computational tasks. AMD announced in 2016 that it was creating a joint venture to produce x86 server chips for the Chinese market. On May 7, 2019, it was reported that the U.S. Department of Energy, Oak Ridge National Laboratory, and Cray Inc., are working in collaboration with AMD to develop the Frontier exascale supercomputer. Featuring the AMD Epyc CPUs and Radeon GPUs, the supercomputer is set to produce more than 1.5 exaflops (peak double-precision) in computing performance. It is expected to debut sometime in 2021. On March 5, 2020, it was announced that the U.S. Department of Energy, Lawrence Livermore National Laboratory, and HPE are working in collaboration with AMD to develop the El Capitan exascale supercomputer. Featuring the AMD Epyc CPUs and Radeon GPUs, the supercomputer is set to produce more than 2 exaflops (peak double-precision) in computing performance. It is expected to debut in 2023. In the summer of 2020, it was reported that AMD would be powering the next-generation console offerings from Microsoft and Sony. On November 8, 2021, AMD announced a partnership with Meta to make the chips used in the Metaverse. In January 2022, AMD partnered with Samsung to develop a mobile processor to be used in future products. The processor was named Exynos 2022 and works with the AMD RDNA 2 architecture. Litigation with Intel AMD has a long history of litigation with former (and current) partner and x86 creator Intel. In 1986, Intel broke an agreement it had with AMD to allow them to produce Intel's microchips for IBM; AMD filed for arbitration in 1987 and the arbitrator decided in AMD's favor in 1992. Intel disputed this, and the case ended up in the Supreme Court of California. In 1994, that court upheld the arbitrator's decision and awarded damages for breach of contract. In 1990, Intel brought a copyright infringement action alleging illegal use of its 287 microcode. The case ended in 1994 with a jury finding for AMD and its right to use Intel's microcode in its microprocessors through the 486 generation. In 1997, Intel filed suit against AMD and Cyrix Corp. for misuse of the term MMX. AMD and Intel settled, with AMD acknowledging MMX as a trademark owned by Intel, and with Intel granting AMD rights to market the AMD K6 MMX processor. In 2005, following an investigation, the Japan Federal Trade Commission found Intel guilty of a number of violations. On June 27, 2005, AMD won an antitrust suit against Intel in Japan, and on the same day, AMD filed a broad antitrust complaint against Intel in the U.S. Federal District Court in Delaware. The complaint alleges systematic use of secret rebates, special discounts, threats, and other means used by Intel to lock AMD processors out of the global market. Since the start of this action, the court has issued subpoenas to major computer manufacturers including Acer, Dell, Lenovo, HP and Toshiba. In November 2009, Intel agreed to pay AMD $1.25bn and renew a five-year patent cross-licensing agreement as part of a deal to settle all outstanding legal disputes between them. Guinness World Record achievement On August 31, 2011, in Austin, Texas, AMD achieved a Guinness World Record for the "Highest frequency of a computer processor": 8.429 GHz. The company ran an 8-core FX-8150 processor with only one active module (two cores), and cooled with liquid helium. The previous record was 8.308 GHz, with an Intel Celeron 352 (one core). On November 1, 2011, geek.com reported that Andre Yang, an overclocker from Taiwan, used an FX-8150 to set another record: 8.461 GHz. On November 19, 2012, Andre Yang used an FX-8350 to set another record: 8.794 GHz. Acquisitions, mergers and investments Corporate social responsibility In its 2012 report on progress relating to conflict minerals, the Enough Project rated AMD the fifth most progressive of 24 consumer electronics companies. Other initiatives 50x15, digital inclusion, with targeted 50% of world population to be connected through Internet via affordable computers by the year of 2015. The Green Grid, founded by AMD together with other founders, such as IBM, Sun and Microsoft, to seek lower power consumption for grids. See also Bill Gaede List of AMD processors List of AMD accelerated processing units List of AMD graphics processing units List of AMD chipsets List of ATI chipsets 3DNow! Cool'n'Quiet PowerNow! Notes References Rodengen, Jeffrey L. The Spirit of AMD: Advanced Micro Devices. Write Stuff, 1998. Ruiz, Hector. Slingshot: AMD's Fight to Free an Industry from the Ruthless Grip of Intel. Greenleaf Book Group, 2013. External links 1969 establishments in California 1970s initial public offerings American companies established in 1969 Fabless semiconductor companies Companies based in Santa Clara, California Companies formerly listed on the New York Stock Exchange Companies listed on the Nasdaq Companies in the Nasdaq-100 Computer companies of the United States Computer companies established in 1969 Electronics companies established in 1969 Graphics hardware companies HSA Foundation founding members Manufacturing companies based in the San Francisco Bay Area Motherboard companies Semiconductor companies of the United States Superfund sites in California Technology companies based in the San Francisco Bay Area Technology companies established in 1969
2402
https://en.wikipedia.org/wiki/Albrecht%20D%C3%BCrer
Albrecht Dürer
Albrecht Dürer (; ; 21 May 1471 – 6 April 1528), sometimes spelled in English as Durer, was a German painter, printmaker, and theorist of the German Renaissance. Born in Nuremberg, Dürer established his reputation and influence across Europe in his twenties due to his high-quality woodcut prints. He was in contact with the major Italian artists of his time, including Raphael, Giovanni Bellini, and Leonardo da Vinci, and from 1512 was patronized by Emperor Maximilian I. Dürer's vast body of work includes engravings, his preferred technique in his later prints, altarpieces, portraits and self-portraits, watercolours and books. The woodcuts series are more Gothic than the rest of his work. His well-known engravings include the three Meisterstiche (master prints) Knight, Death and the Devil (1513), Saint Jerome in his Study (1514), and Melencolia I (1514). His watercolours mark him as one of the first European landscape artists, while his woodcuts revolutionised the potential of that medium. Dürer's introduction of classical motifs into Northern art, through his knowledge of Italian artists and German humanists, has secured his reputation as one of the most important figures of the Northern Renaissance. This is reinforced by his theoretical treatises, which involve principles of mathematics, perspective, and ideal proportions. Biography Early life (1471–1490) Dürer was born on 21 May 1471, the third child and second son of Albrecht Dürer the Elder and Barbara Holper, who married in 1467 and had eighteen children together. Albrecht Dürer the Elder (originally Albrecht Ajtósi) was a successful goldsmith who by 1455 had moved to Nuremberg from Ajtós, near Gyula in Hungary. He married Holper, his master's daughter, when he himself qualified as a master. One of Albrecht's brothers, Hans Dürer, was also a painter and trained under him. Her mother had some roots in Hungary to, Kinga Öllinger was born in Sopron. Another of Albrecht's brothers, Endres Dürer, took over their father's business and was a master goldsmith. The German name "Dürer" is a translation from the Hungarian, "Ajtósi". Initially, it was "Türer", meaning doormaker, which is "ajtós" in Hungarian (from "ajtó", meaning door). A door is featured in the coat-of-arms the family acquired. Albrecht Dürer the Younger later changed "Türer", his father's diction of the family's surname, to "Dürer", to adapt to the local Nuremberg dialect. Dürer's godfather Anton Koberger left goldsmithing to become a printer and publisher in the year of Dürer's birth. He became the most successful publisher in Germany, eventually owning twenty-four printing-presses and a number of offices in Germany and abroad. Koberger's most famous publication was the Nuremberg Chronicle, published in 1493 in German and Latin editions. It contained an unprecedented 1,809 woodcut illustrations (albeit with many repeated uses of the same block) by the Wolgemut workshop. Dürer may have worked on some of these, as the work on the project began while he was with Wolgemut. Because Dürer left autobiographical writings and was widely known by his mid-twenties, his life is well documented in several sources. After a few years of school, Dürer learned the basics of goldsmithing and drawing from his father. Though his father wanted him to continue his training as a goldsmith, he showed such a precocious talent in drawing that he started as an apprentice to Michael Wolgemut at the age of fifteen in 1486. A self-portrait, a drawing in silverpoint, is dated 1484 (Albertina, Vienna) "when I was a child", as his later inscription says. The drawing is one of the earliest surviving children's drawings of any kind, and, as Dürer's Opus One, has helped define his oeuvre as deriving from, and always linked to, himself. Wolgemut was the leading artist in Nuremberg at the time, with a large workshop producing a variety of works of art, in particular woodcuts for books. Nuremberg was then an important and prosperous city, a centre for publishing and many luxury trades. It had strong links with Italy, especially Venice, a relatively short distance across the Alps. Wanderjahre and marriage (1490–1494) After completing his apprenticeship, Dürer followed the common German custom of taking Wanderjahre—in effect gap years—in which the apprentice learned skills from artists in other areas; Dürer was to spend about four years away. He left in 1490, possibly to work under Martin Schongauer, the leading engraver of Northern Europe, but who died shortly before Dürer's arrival at Colmar in 1492. It is unclear where Dürer travelled in the intervening period, though it is likely that he went to Frankfurt and the Netherlands. In Colmar, Dürer was welcomed by Schongauer's brothers, the goldsmiths Caspar and Paul and the painter Ludwig. Later that year, Dürer travelled to Basel to stay with another brother of Martin Schongauer, the goldsmith Georg. In 1493 Dürer went to Strasbourg, where he would have experienced the sculpture of Nikolaus Gerhaert. Dürer's first painted self-portrait (now in the Louvre) was painted at this time, probably to be sent back to his fiancée in Nuremberg. Very soon after his return to Nuremberg, on 7 July 1494, at the age of 23, Dürer was married to Agnes Frey following an arrangement made during his absence. Agnes was the daughter of a prominent brass worker (and amateur harpist) in the city. However, no children resulted from the marriage, and with Albrecht the Dürer name died out. The marriage between Agnes and Albrecht was not a generally happy one, as indicated by the letters of Dürer in which he quipped to Willibald Pirckheimer in an extremely rough tone about his wife. He called her an "old crow" and made other vulgar remarks. Pirckheimer also made no secret of his antipathy towards Agnes, describing her as a miserly shrew with a bitter tongue, who helped cause Dürer's death at a young age. It has been hypothesized by many scholars that Albrecht was bisexual or homosexual, due to the recurrence of homoerotic themes in his works (e.g. The Men's Bath), and the nature of his correspondence with close friends. First journey to Italy (1494–1495) Within three months of his marriage, Dürer left for Italy, alone, perhaps stimulated by an outbreak of plague in Nuremberg. He made watercolour sketches as he traveled over the Alps. Some have survived and others may be deduced from accurate landscapes of real places in his later work, for example his engraving Nemesis. In Italy, he went to Venice to study its more advanced artistic world. Through Wolgemut's tutelage, Dürer had learned how to make prints in drypoint and design woodcuts in the German style, based on the works of Schongauer and the Housebook Master. He also would have had access to some Italian works in Germany, but the two visits he made to Italy had an enormous influence on him. He wrote that Giovanni Bellini was the oldest and still the best of the artists in Venice. His drawings and engravings show the influence of others, notably Antonio del Pollaiuolo, with his interest in the proportions of the body; Lorenzo di Credi; and Andrea Mantegna, whose work he produced copies of while training. Dürer probably also visited Padua and Mantua on this trip. Return to Nuremberg (1495–1505) On his return to Nuremberg in 1495, Dürer opened his own workshop (being married was a requirement for this). Over the next five years, his style increasingly integrated Italian influences into underlying Northern forms. Arguably his best works in the first years of the workshop were his woodcut prints, mostly religious, but including secular scenes such as The Men's Bath House (). These were larger and more finely cut than the great majority of German woodcuts hitherto, and far more complex and balanced in composition. It is now thought unlikely that Dürer cut any of the woodblocks himself; this task would have been performed by a specialist craftsman. However, his training in Wolgemut's studio, which made many carved and painted altarpieces and both designed and cut woodblocks for woodcut, evidently gave him great understanding of what the technique could be made to produce, and how to work with block cutters. Dürer either drew his design directly onto the woodblock itself, or glued a paper drawing to the block. Either way, his drawings were destroyed during the cutting of the block. His series of sixteen designs for the Apocalypse is dated 1498, as is his engraving of St. Michael Fighting the Dragon. He made the first seven scenes of the Great Passion in the same year, and a little later, a series of eleven on the Holy Family and saints. The Seven Sorrows Polyptych, commissioned by Frederick III of Saxony in 1496, was executed by Dürer and his assistants c. 1500. In 1502, Dürer's father died. Around 1503–1505 Dürer produced the first 17 of a set illustrating the Life of the Virgin, which he did not finish for some years. Neither these nor the Great Passion were published as sets until several years later, but prints were sold individually in considerable numbers. During the same period Dürer trained himself in the difficult art of using the burin to make engravings. It is possible he had begun learning this skill during his early training with his father, as it was also an essential skill of the goldsmith. In 1496 he executed the Prodigal Son, which the Italian Renaissance art historian Giorgio Vasari singled out for praise some decades later, noting its Germanic quality. He was soon producing some spectacular and original images, notably Nemesis (1502), The Sea Monster (1498), and Saint Eustace (), with a highly detailed landscape background and animals. His landscapes of this period, such as Pond in the Woods and Willow Mill, are quite different from his earlier watercolours. There is a much greater emphasis on capturing atmosphere, rather than depicting topography. He made a number of Madonnas, single religious figures, and small scenes with comic peasant figures. Prints are highly portable and these works made Dürer famous throughout the main artistic centres of Europe within a very few years. The Venetian artist Jacopo de' Barbari, whom Dürer had met in Venice, visited Nuremberg in 1500, and Dürer said that he learned much about the new developments in perspective, anatomy, and proportion from him. De' Barbari was unwilling to explain everything he knew, so Dürer began his own studies, which would become a lifelong preoccupation. A series of extant drawings show Dürer's experiments in human proportion, leading to the famous engraving of Adam and Eve (1504), which shows his subtlety while using the burin in the texturing of flesh surfaces. This is the only existing engraving signed with his full name. Dürer created large numbers of preparatory drawings, especially for his paintings and engravings, and many survive, most famously the Betende Hände (Praying Hands) from circa 1508, a study for an apostle in the Heller altarpiece. He continued to make images in watercolour and bodycolour (usually combined), including a number of still lifes of meadow sections or animals, including his Young Hare (1502) and the Great Piece of Turf (1503). Second journey to Italy (1505–1507) In Italy, he returned to painting, at first producing a series of works executed in tempera on linen. These include portraits and altarpieces, notably, the Paumgartner altarpiece and the Adoration of the Magi. In early 1506, he returned to Venice and stayed there until the spring of 1507. By this time Dürer's engravings had attained great popularity and were being copied. In Venice he was given a valuable commission from the emigrant German community for the church of San Bartolomeo. This was the altar-piece known as the Adoration of the Virgin or the Feast of Rose Garlands. It includes portraits of members of Venice's German community, but shows a strong Italian influence. It was later acquired by the Emperor Rudolf II and taken to Prague. Nuremberg and the masterworks (1507–1520) Despite the regard in which he was held by the Venetians, Dürer returned to Nuremberg by mid-1507, remaining in Germany until 1520. His reputation had spread throughout Europe and he was on friendly terms and in communication with most of the major artists including Raphael. Between 1507 and 1511 Dürer worked on some of his most celebrated paintings: Adam and Eve (1507), Martyrdom of the Ten Thousand (1508, for Frederick of Saxony), Virgin with the Iris (1508), the altarpiece Assumption of the Virgin (1509, for Jacob Heller of Frankfurt), and Adoration of the Trinity (1511, for Matthaeus Landauer). During this period he also completed two woodcut series, the Great Passion and the Life of the Virgin, both published in 1511 together with a second edition of the Apocalypse series. The post-Venetian woodcuts show Dürer's development of chiaroscuro modelling effects, creating a mid-tone throughout the print to which the highlights and shadows can be contrasted. Other works from this period include the thirty-seven Little Passion woodcuts, first published in 1511, and a set of fifteen small engravings on the same theme in 1512. Complaining that painting did not make enough money to justify the time spent when compared to his prints, he produced no paintings from 1513 to 1516. In 1513 and 1514 Dürer created his three most famous engravings: Knight, Death and the Devil (1513, probably based on Erasmus's Handbook of a Christian Knight), St. Jerome in His Study, and the much-debated Melencolia I (both 1514, the year Dürer's mother died). Further outstanding pen and ink drawings of Dürer's period of art work of 1513 were drafts for his friend Pirckheimer. These drafts were later used to design Lusterweibchen chandeliers, combining an antler with a wooden sculpture. In 1515, he created his woodcut of a Rhinoceros which had arrived in Lisbon from a written description and sketch by another artist, without ever seeing the animal himself. An image of the Indian rhinoceros, the image has such force that it remains one of his best-known and was still used in some German school science text-books as late as last century. In the years leading to 1520 he produced a wide range of works, including the woodblocks for the first western printed star charts in 1515 and portraits in tempera on linen in 1516. His only experiments with etching came in this period, producing five between 1515–1516 and a sixth in 1518; a technique he may have abandoned as unsuited to his aesthetic of methodical, classical form. Patronage of Maximilian I From 1512, Maximilian I became Dürer's major patron. He commissioned The Triumphal Arch, a vast work printed from 192 separate blocks, the symbolism of which is partly informed by Pirckheimer's translation of Horapollo's Hieroglyphica. The design program and explanations were devised by Johannes Stabius, the architectural design by the master builder and court-painter Jörg Kölderer and the woodcutting itself by Hieronymous Andreae, with Dürer as designer-in-chief. The Arch was followed by The Triumphal Procession, the program of which was worked out in 1512 by Marx Treitz-Saurwein and includes woodcuts by Albrecht Altdorfer and Hans Springinklee, as well as Dürer. Dürer worked with pen on the marginal images for an edition of the Emperor's printed Prayer-Book; these were quite unknown until facsimiles were published in 1808 as part of the first book published in lithography. Dürer's work on the book was halted for an unknown reason, and the decoration was continued by artists including Lucas Cranach the Elder and Hans Baldung. Dürer also made several portraits of the Emperor, including one shortly before Maximilian's death in 1519. Maximilian was a very cash-strapped prince who sometimes failed to pay, yet turned out to be Dürer's most important patron. In his court, artists and learned men were respected, which was not common at that time (later, Dürer commented that in Germany, as a non-noble, he was treated as a parasite). Pirckheimer (who he met in 1495, before entering the service of Maximilian) was also an important personage in the court and great cultural patron, who had a strong influence on Dürer as his tutor in classical knowledge and humanistic critical methodology, as well as collaborator. In Maximilian's court, Dürer also collaborated with a great number of other brilliant artists and scholars of the time who became his friends, like Johannes Stabius, Konrad Peutinger, Conrad Celtes, and Hans Tscherte (an imperial architect). Dürer manifested a strong pride in his ability, as a prince of his profession. One day, the emperor, trying to show Dürer an idea, tried to sketch with the charcoal himself, but always broke it. Dürer took the charcoal from Maximilian's hand, finished the drawing and told him: "This is my scepter." In another occasion, Maximilian noticed that the ladder Dürer used was too short and unstable, thus told a noble to hold it for him. The noble refused, saying that it was beneath him to serve a non-noble. Maximilian then came to hold the ladder himself, and told the noble that he could make a noble out of a peasant any day, but he could not make an artist like Dürer out of a noble. This story and a 1849 painting depicting it by have become relevant recently. This nineteenth-century painting shows Dürer painting a mural at St. Stephen's Cathedral, Vienna. Apparently, this reflects a seventeenth-century "artists' legend" about the previously mentioned encounter (in which the emperor held the ladder) – that this encounter corresponds with the period Dürer was working on the Viennese murals. In 2020, during restoration work, art connoisseurs discovered a piece of handwriting now attributed to Dürer, suggesting that the Nuremberg master had actually participated in creating the murals at St. Stephen's Cathedral. In the recent 2022 Dürer exhibition in Nuremberg (in which the drawing technique is also traced and connected to Dürer's other works), the identity of the commissioner is discussed. Now the painting of Siegert (and the legend associated with it) is used as evidence to suggest that this was Maximilian. Dürer is historically recorded to have entered the emperor's service in 1511, and the mural's date is calculated to be around 1505, but it is possible they have known and worked with each other earlier than 1511. Cartographic and astronomical works Dürer's exploration of space led to a relationship and cooperation with the court astronomer Johannes Stabius. Stabius also often acted as Dürer's and Maximilian's go-between for their financial problems. In 1515 Dürer and Stabius created the first world map projected on a solid geometric sphere. Also in 1515, Stabius, Dürer and the astronomer Konrad Heinfogel produced the first planispheres of both southern and northerns hemispheres, as well as the first printed celestial maps, which prompted the revival of interest in the field of uranometry throughout Europe. Journey to the Netherlands (1520–1521) Maximilian's death came at a time when Dürer was concerned he was losing "my sight and freedom of hand" (perhaps caused by arthritis) and increasingly affected by the writings of Martin Luther. In July 1520 Dürer made his fourth and last major journey, to renew the Imperial pension Maximilian had given him and to secure the patronage of the new emperor, Charles V, who was to be crowned at Aachen. Dürer journeyed with his wife and her maid via the Rhine to Cologne and then to Antwerp, where he was well received and produced numerous drawings in silverpoint, chalk and charcoal. In addition to attending the coronation, he visited Cologne (where he admired the painting of Stefan Lochner), Nijmegen, 's-Hertogenbosch, Bruges (where he saw Michelangelo's Madonna of Bruges), Ghent (where he admired van Eyck's Ghent altarpiece), and Zeeland. Dürer took a large stock of prints with him and wrote in his diary to whom he gave, exchanged or sold them, and for how much. This provides rare information of the monetary value placed on prints at this time. Unlike paintings, their sale was very rarely documented. While providing valuable documentary evidence, Dürer's Netherlandish diary also reveals that the trip was not a profitable one. For example, Dürer offered his last portrait of Maximilian to his daughter, Margaret of Austria, but eventually traded the picture for some white cloth after Margaret disliked the portrait and declined to accept it. During this trip he also met Bernard van Orley, Jan Provoost, Gerard Horenbout, Jean Mone, Joachim Patinir and Tommaso Vincidor, though he did not, it seems, meet Quentin Matsys. Having secured his pension, Dürer returned home in July 1521, having caught an undetermined illness, which afflicted him for the rest of his life, and greatly reduced his rate of work. Final years, Nuremberg (1521–1528) On his return to Nuremberg, Dürer worked on a number of grand projects with religious themes, including a crucifixion scene and a Sacra conversazione, though neither was completed. This may have been due in part to his declining health, but perhaps also because of the time he gave to the preparation of his theoretical works on geometry and perspective, the proportions of men and horses, and fortification. However, one consequence of this shift in emphasis was that during the last years of his life, Dürer produced comparatively little as an artist. In painting, there was only a portrait of Hieronymus Holtzschuher, a Madonna and Child (1526), Salvator Mundi (1526), and two panels showing St. John with St. Peter in background and St. Paul with St. Mark in the background. This last great work, the Four Apostles, was given by Dürer to the City of Nuremberg—although he was given 100 guilders in return. As for engravings, Dürer's work was restricted to portraits and illustrations for his treatise. The portraits include Cardinal-Elector Albert of Mainz; Frederick the Wise, elector of Saxony; the humanist scholar Willibald Pirckheimer; Philipp Melanchthon, and Erasmus of Rotterdam. For those of the Cardinal, Melanchthon, and Dürer's final major work, a drawn portrait of the Nuremberg patrician Ulrich Starck, Dürer depicted the sitters in profile. Despite complaining of his lack of a formal classical education, Dürer was greatly interested in intellectual matters and learned much from his boyhood friend Willibald Pirckheimer, whom he no doubt consulted on the content of many of his images. He also derived great satisfaction from his friendships and correspondence with Erasmus and other scholars. Dürer succeeded in producing two books during his lifetime. The Four Books on Measurement were published at Nuremberg in 1525 and was the first book for adults on mathematics in German, as well as being cited later by Galileo and Kepler. The other, a work on city fortifications, was published in 1527. The Four Books on Human Proportion were published posthumously, shortly after his death in 1528. Dürer died in Nuremberg at the age of 56, leaving an estate valued at 6,874 florins – a considerable sum. He is buried in the Johannisfriedhof cemetery. His large house (purchased in 1509 from the heirs of the astronomer Bernhard Walther), where his workshop was located and where his widow lived until her death in 1539, remains a prominent Nuremberg landmark. Dürer and the Reformation Dürer's writings suggest that he may have been sympathetic to Luther's ideas, though it is unclear if he ever left the Catholic Church. Dürer wrote of his desire to draw Luther in his diary in 1520: "And God help me that I may go to Dr. Martin Luther; thus I intend to make a portrait of him with great care and engrave him on a copper plate to create a lasting memorial of the Christian man who helped me overcome so many difficulties." In a letter to Nicholas Kratzer in 1524, Dürer wrote, "because of our Christian faith we have to stand in scorn and danger, for we are reviled and called heretics". Most tellingly, Pirckheimer wrote in a letter to Johann Tscherte in 1530: "I confess that in the beginning I believed in Luther, like our Albert of blessed memory ... but as anyone can see, the situation has become worse." Dürer may even have contributed to the Nuremberg City Council's mandating Lutheran sermons and services in March 1525. Notably, Dürer had contacts with various reformers, such as Zwingli, Andreas Karlstadt, Melanchthon, Erasmus and Cornelius Grapheus from whom Dürer received Luther's Babylonian Captivity in 1520. Yet Erasmus and C. Grapheus are better said to be Catholic change agents. Also, from 1525, "the year that saw the peak and collapse of the Peasants' War, the artist can be seen to distance himself somewhat from the [Lutheran] movement..." Dürer's later works have also been claimed to show Protestant sympathies. His 1523 The Last Supper woodcut has often been understood to have an evangelical theme, focusing as it does on Christ espousing the Gospel, as well as the inclusion of the Eucharistic cup, an expression of Protestant utraquism, although this interpretation has been questioned. The delaying of the engraving of St Philip, completed in 1523 but not distributed until 1526, may have been due to Dürer's uneasiness with images of saints; even if Dürer was not an iconoclast, in his last years he evaluated and questioned the role of art in religion. Legacy and influence Dürer exerted a huge influence on the artists of succeeding generations, especially in printmaking, the medium through which his contemporaries mostly experienced his art, as his paintings were predominantly in private collections located in only a few cities. His success in spreading his reputation across Europe through prints was undoubtedly an inspiration for major artists such as Raphael, Titian, and Parmigianino, all of whom collaborated with printmakers to promote and distribute their work. His engravings seem to have had an intimidating effect upon his German successors; the "Little Masters" who attempted few large engravings but continued Dürer's themes in small, rather cramped compositions. Lucas van Leyden was the only Northern European engraver to successfully continue to produce large engravings in the first third of the 16th century. The generation of Italian engravers who trained in the shadow of Dürer all either directly copied parts of his landscape backgrounds (Giulio Campagnola, Giovanni Battista Palumba, Benedetto Montagna and Cristofano Robetta), or whole prints (Marcantonio Raimondi and Agostino Veneziano). However, Dürer's influence became less dominant after 1515, when Marcantonio perfected his new engraving style, which in turn travelled over the Alps to also dominate Northern engraving. In painting, Dürer had relatively little influence in Italy, where probably only his altarpiece in Venice was seen, and his German successors were less effective in blending German and Italian styles. His intense and self-dramatizing self-portraits have continued to have a strong influence up to the present, especially on painters in the 19th and 20th century who desired a more dramatic portrait style. Dürer has never fallen from critical favour, and there have been significant revivals of interest in his works in Germany in the Dürer Renaissance of about 1570 to 1630, in the early nineteenth century, and in German nationalism from 1870 to 1945. The Lutheran Church commemorates Dürer annually on 6 April, along with Michelangelo, Lucas Cranach the Elder and Hans Burgkmair. Theoretical works In all his theoretical works, in order to communicate his theories in the German language rather than in Latin, Dürer used graphic expressions based on a vernacular, craftsmen's language. For example, "Schneckenlinie" ("snail-line") was his term for a spiral form. Thus, Dürer contributed to the expansion in German prose which Luther had begun with his translation of the Bible. Four Books on Measurement Dürer's work on geometry is called the Four Books on Measurement (Underweysung der Messung mit dem Zirckel und Richtscheyt or Instructions for Measuring with Compass and Ruler). The first book focuses on linear geometry. Dürer's geometric constructions include helices, conchoids and epicycloids. He also draws on Apollonius, and Johannes Werner's 'Libellus super viginti duobus elementis conicis' of 1522. The second book moves onto two-dimensional geometry, i.e. the construction of regular polygons. Here Dürer favours the methods of Ptolemy over Euclid. The third book applies these principles of geometry to architecture, engineering and typography. In architecture Dürer cites Vitruvius but elaborates his own classical designs and columns. In typography, Dürer depicts the geometric construction of the Latin alphabet, relying on Italian precedent. However, his construction of the Gothic alphabet is based upon an entirely different modular system. The fourth book completes the progression of the first and second by moving to three-dimensional forms and the construction of polyhedra. Here Dürer discusses the five Platonic solids, as well as seven Archimedean semi-regular solids, as well as several of his own invention. Four Books on Human Proportion Dürer's work on human proportions is called the Four Books on Human Proportion (Vier Bücher von Menschlicher Proportion) of 1528. The first book was mainly composed by 1512/13 and completed by 1523, showing five differently constructed types of both male and female figures, all parts of the body expressed in fractions of the total height. Dürer based these constructions on both Vitruvius and empirical observations of "two to three hundred living persons", in his own words. The second book includes eight further types, broken down not into fractions but an Albertian system, which Dürer probably learned from Francesco di Giorgio's of 1525. In the third book, Dürer gives principles by which the proportions of the figures can be modified, including the mathematical simulation of convex and concave mirrors; here Dürer also deals with human physiognomy. The fourth book is devoted to the theory of movement. Appended to the last book, however, is a self-contained essay on aesthetics, which Dürer worked on between 1512 and 1528, and it is here that we learn of his theories concerning 'ideal beauty'. Dürer rejected Alberti's concept of an objective beauty, proposing a relativist notion of beauty based on variety. Nonetheless, Dürer still believed that truth was hidden within nature, and that there were rules which ordered beauty, even though he found it difficult to define the criteria for such a code. In 1512/13 his three criteria were function ('Nutz'), naïve approval ('Wohlgefallen') and the happy medium ('Mittelmass'). However, unlike Alberti and Leonardo, Dürer was most troubled by understanding not just the abstract notions of beauty but also as to how an artist can create beautiful images. Between 1512 and the final draft in 1528, Dürer's belief developed from an understanding of human creativity as spontaneous or inspired to a concept of 'selective inward synthesis'. In other words, that an artist builds on a wealth of visual experiences in order to imagine beautiful things. Dürer's belief in the abilities of a single artist over inspiration prompted him to assert that "one man may sketch something with his pen on half a sheet of paper in one day, or may cut it into a tiny piece of wood with his little iron, and it turns out to be better and more artistic than another's work at which its author labours with the utmost diligence for a whole year". Book on Fortification In 1527, Dürer also published Various Lessons on the Fortification of Cities, Castles, and Localities (Etliche Underricht zu Befestigung der Stett, Schloss und Flecken). It was printed in Nuremberg, probably by Hieronymus Andreae and reprinted in 1603 by Johan Janssenn in Arnhem. In 1535 it was also translated into Latin as On Cities, Forts, and Castles, Designed and Strengthened by Several Manners: Presented for the Most Necessary Accommodation of War (De vrbibus, arcibus, castellisque condendis, ac muniendis rationes aliquot : praesenti bellorum necessitati accommodatissimae), published by Christian Wechel (Wecheli/Wechelus) in Paris. The work is less proscriptively theoretical than his other works, and was soon overshadowed by the Italian theory of polygonal fortification (the trace italienne – see Bastion fort), though his designs seem to have had some influence in the eastern German lands and up into the Baltic region. Fencing Dürer created many sketches and woodcuts of soldiers and knights over the course of his life. His most significant martial works, however, were made in 1512 as part of his efforts to secure the patronage of Maximilian I. Using existing manuscripts from the Nuremberg Group as his reference, his workshop produced the extensive Οπλοδιδασκαλια sive Armorvm Tractandorvm Meditatio Alberti Dvreri ("Weapon Training, or Albrecht Dürer's Meditation on the Handling of Weapons", MS 26-232). Another manuscript based on the Nuremberg texts as well as one of Hans Talhoffer's works, the untitled Berlin Picture Book (Libr.Pict.A.83), is also thought to have originated in his workshop around this time. These sketches and watercolors show the same careful attention to detail and human proportion as Dürer's other work, and his illustrations of grappling, long sword, dagger, and messer are among the highest-quality in any fencing manual. Gallery List of works List of paintings by Albrecht Dürer List of engravings by Albrecht Dürer List of woodcuts by Albrecht Dürer References Notes Citations Sources Bartrum, Giulia. Albrecht Dürer and his Legacy. London: British Museum Press, 2002. Brand Philip, Lotte; Anzelewsky, Fedja. "The Portrait Diptych of Dürer's parents". Simiolus: Netherlands Quarterly for the History of Art, Volume 10, No. 1, 1978–79. 5–18 Brion, Marcel. Dürer. London: Thames and Hudson, 1960 Harbison, Craig. "Dürer and the Reformation: The Problem of the Re-dating of the St. Philip Engraving". The Art Bulletin, Vol. 58, No. 3, 368–373. September 1976 Koerner, Joseph Leo. The Moment of Self-Portraiture in German Renaissance Art. Chicago/London: University of Chicago Press, 1993. Landau David; Parshall, Peter. The Renaissance Print. Yale, 1996. Panofsky, Erwin. The Life and Art of Albrecht Dürer. NJ: Princeton, 1945. Price, David Hotchkiss. Albrecht Dürer's Renaissance: Humanism, Reformation and the Art of Faith. Michigan, 2003. . Strauss, Walter L. (ed.). The Complete Engravings, Etchings and Drypoints of Albrecht Durer. Mineola NY: Dover Publications, 1973. Borchert, Till-Holger. Van Eyck to Dürer: The Influence of Early Netherlandish painting on European Art, 1430–1530. London: Thames & Hudson, 2011. Wolf, Norbert. Albrecht Dürer. Taschen, 2010. Further reading Campbell Hutchison, Jane. Albrecht Dürer: A Biography. Princeton University Press, 1990. Demele, Christine. Dürers Nacktheit – Das Weimarer Selbstbildnis. Rhema Verlag, Münster 2012, Dürer, Albrecht (translated by R.T. Nichol from the Latin text), Of the Just Shaping of Letters, Dover Publications. Hart, Vaughan. 'Navel Gazing. On Albrecht Dürer's Adam and Eve (1504)', The International Journal of Arts Theory and History, 2016, vol.12.1 pp. 1–10 https://doi.org/10.18848/2326-9960/CGP/v12i01/1-10 Korolija Fontana-Giusti, Gordana. "The Unconscious and Space: Venice and the work of Albrecht Dürer", in Architecture and the Unconscious, eds. J. Hendrix and L.Holm, Farnham Surrey: Ashgate, 2016. pp. 27–44, . Wilhelm, Kurth (ed.). The Complete Woodcuts of Albrecht Durer, Dover Publications, 2000. External links The Strange World of Albrecht Dürer at the Sterling and Francine Clark Art Institute. 14 November 2010 – 13 March 2011 Dürer Prints Close-up. Made to accompany The Strange World of Albrecht Dürer at the Sterling and Francine Clark Art Institute. 14 November 2010 – 13 March 2011 Albrecht Dürer: Vier Bücher von menschlicher Proportion (Nuremberg, 1528). Selected pages scanned from the original work. Historical Anatomies on the Web. US National Library of Medicine. "Albrecht Dürer (1471–1528)". In Heilbrunn Timeline of Art History. New York: The Metropolitan Museum of Art Albrecht Durer, Exhibition, Albertina, Vienna. 20 September 2019 – 6 January 2020 1471 births 1528 deaths 15th-century engravers 15th-century German painters 16th-century engravers 16th-century German painters Animal artists Artist authors Artists from Nuremberg Catholic decorative artists Catholic engravers Catholic painters German draughtsmen German engravers German Lutherans German male painters German people of Hungarian descent German printmakers German Renaissance painters German Roman Catholics Heraldic artists Manuscript illuminators Mathematical artists People celebrated in the Lutheran liturgical calendar Renaissance engravers Woodcut designers
2406
https://en.wikipedia.org/wiki/Alban%20Berg
Alban Berg
Alban Maria Johannes Berg ( , ; 9 February 1885 – 24 December 1935) was an Austrian composer of the Second Viennese School. His compositional style combined Romantic lyricism with the twelve-tone technique. Although he left a relatively small oeuvre, he is remembered as one of the most important composers of the 20th century for his expressive style encompassing "entire worlds of emotion and structure". Berg was born and lived in Vienna. He began to compose at the age of fifteen. He studied counterpoint, music theory and harmony with Arnold Schoenberg between 1904 and 1911, and adopted his principles of developing variation and the twelve-tone technique. Berg's major works include the operas Wozzeck (1924) and Lulu (1935, finished posthumously), the chamber pieces Lyric Suite and Chamber Concerto, as well as a Violin Concerto. He also composed a number of songs (lieder). He is said to have brought more "human values" to the twelve-tone system; his works are seen as more "emotional" than those of Schoenberg. His music had a surface glamour that won him admirers when Schoenberg himself had few. Berg died from sepsis in 1935. Life and career Early life Berg was born in Vienna, the third of four children of Johanna and Konrad Berg. His father ran a successful export business, and the family owned several estates in Vienna and the countryside. The family's financial situation turned to the worse after the death of Konrad Berg in 1900, and it particularly affected young Berg, who had to repeat both his sixth and seventh grade to pass the exams. One of his closest lifelong friends and earliest biographer (under the pseudonym Hermann Herrenried), architect Hermann Watznauer, became a father figure (partly at Konrad's request), being ten years Berg's senior. Berg wrote him letters as long as thirty pages, often in florid, dramatic prose with idiosyncratic punctuation. Berg was more interested in literature than music as a child and would consider a career as a writer several times, turning to music slowly and at times unconfidently until the success of Wozzeck. He did not begin to compose until he was fifteen, when he started to teach himself music, although he did take piano lessons from his sister's governess. With Marie Scheuchl, a maid in the family estate of Berghof in Carinthia and fifteen years his senior, he fathered a daughter, Albine, born 4 December 1902. In 1906 Berg met the singer (1885–1976), daughter of a wealthy family (rumoured to be in fact the illegitimate daughter of Emperor Franz Joseph I from his liaison with Anna Nahowski). Despite the outward hostility of her family, the couple married on 3 May 1911. Early works (1907–1914) With little prior music education, Berg began studying counterpoint, music theory, and harmony under Arnold Schoenberg in October 1904. By 1906 he was studying music full-time; by 1907 he began composition lessons. His student compositions included five drafts for piano sonatas. He also wrote songs, including his Seven Early Songs (Sieben frühe Lieder), three of which were Berg's first publicly performed work in a concert that featured the music of Schoenberg's pupils in Vienna that year. The early sketches eventually culminated in the Piano Sonata, Op. 1 (1907–1908); it is one of the most formidable "first" works ever written. Berg studied with Schoenberg for six years until 1911. Among Schoenberg's teachings was the idea that the unity of a musical composition depends upon all its aspects being derived from a single basic idea; this idea was later known as developing variation. Berg passed this on to his students, one of whom, Theodor W. Adorno, stated: "The main principle he conveyed was that of variation: everything was supposed to develop out of something else and yet be intrinsically different". The Piano Sonata is an example—the whole composition is derived from the work's opening quartal gesture and its opening phrase. Berg was a part of Vienna's cultural elite during the heady fin de siècle period. His circle included the musicians Alexander von Zemlinsky and Franz Schreker, the painter Gustav Klimt, the writer and satirist Karl Kraus, the architect Adolf Loos, and the poet Peter Altenberg. In 1913 two of Berg's Altenberg Lieder (1912) premiered in Vienna, conducted by Schoenberg in the infamous Skandalkonzert. Settings of aphoristic poetic utterances, the songs are accompanied by a very large orchestra. The performance caused a riot, and had to be halted. Berg effectively withdrew the work, and it was not performed in full until 1952. The full score remained unpublished until 1966. Berg had a particular interest in the number 23, using it to structure several works. Various suggestions have been made as to the reason for this interest: that he took it from the biorhythms theory of Wilhelm Fliess, in which a 23-day cycle is considered significant, or because he first suffered an asthma attack on the 23rd of the month. Wozzeck (1917–1924) and Lulu (1928–1929) From 1915 to 1918 Berg served in the Austro-Hungarian Army. During a period of leave in 1917 he accelerated work on his first opera, Wozzeck. After the end of World War I, he settled again in Vienna, where he taught private pupils. He also helped Schoenberg run his Society for Private Musical Performances, which sought to create the ideal environment for the exploration and appreciation of unfamiliar new music by means of open rehearsals, repeat performances, and the exclusion of professional critics. In 1924 three excerpts from Wozzeck were performed, which brought Berg his first public success. The opera, which Berg completed in 1922, was first performed on 14 December 1925, when Erich Kleiber conducted the first performance in Berlin. Today, Wozzeck is seen as one of the century's most important works. Berg made a start on his second opera, the three-act Lulu, in 1928 but interrupted the work in 1929 for the concert aria Der Wein which he completed that summer. Der Wein presaged Lulu in a number of ways, including vocal style, orchestration, design and text. Other well-known Berg compositions include the Lyric Suite (1926), which was later shown to employ elaborate cyphers to document a secret love affair; the post-Mahlerian Three Pieces for Orchestra (completed in 1915 but not performed until after Wozzeck); and the Chamber Concerto (Kammerkonzert, 1923–25) for violin, piano, and 13 wind instruments: this latter is written so conscientiously that Pierre Boulez has called it "Berg's strictest composition" and it, too, is permeated by cyphers and posthumously disclosed hidden programs. It was at this time he began exhibiting tone clusters in his works after meeting with American avant-garde composer Henry Cowell, with whom he would eventually form a lifelong friendship. Final years (1930–1935) Life for the musical world was becoming increasingly difficult in the 1930s both in Vienna and Germany due to the rising tide of antisemitism and the Nazi cultural ideology that denounced modernity. Even to have an association with someone who was Jewish could lead to denunciation, and Berg's "crime" was to have studied with the Jewish composer Arnold Schoenberg. Berg found that opportunities for his work to be performed in Germany were becoming rare, and eventually his music was proscribed and placed on the list of degenerate music. In 1932 Berg and his wife acquired an isolated lodge, the Waldhaus on the southern shore of the Wörthersee, near Schiefling am See in Carinthia, where he was able to work in seclusion, mainly on Lulu and the Violin Concerto. At the end of 1934, Berg became involved in the political intrigues around finding a replacement for Clemens Krauss as director of the Vienna State Opera. As more of the performances of his work in Germany were cancelled by the Nazis, who had come to power in early 1933, he needed to ensure the new director would be an advocate for modernist music. Originally, the premiere of Lulu had been planned for the Berlin State Opera, where Erich Kleiber continued to champion his music and had conducted the premiere of Wozzeck in 1925, but now this was looking increasingly uncertain, and Lulu was rejected by the Berlin authorities in the spring of 1934. Kleiber's production of the Lulu symphonic suite on 30 November 1934 in Berlin was also the occasion of his resignation in protest at the extent of conflation of culture with politics. Even in Vienna, the opportunities for the Vienna School of musicians were dwindling. Berg had interrupted the orchestration of Lulu because of an unexpected (and financially much-needed) commission from the Russian-American violinist Louis Krasner for a Violin Concerto (1935). This profoundly elegiac work, composed at unaccustomed speed and posthumously premiered, has become Berg's best-known and most-beloved composition. Like much of his mature work, it employs an idiosyncratic adaptation of Schoenberg's "dodecaphonic" or twelve-tone technique, that enables the composer to produce passages openly evoking tonality, including quotations from historical tonal music, such as a Bach chorale and a Carinthian folk song. The Violin Concerto was dedicated "to the memory of an Angel", Manon Gropius, the deceased daughter of architect Walter Gropius and Alma Mahler. Death Berg died aged 50 in Vienna, on Christmas Eve 1935, from blood poisoning apparently caused by a furuncle on his back, induced by an insect sting that occurred in November. He was buried at the Hietzing Cemetery in Vienna. Before he died, Berg had completed the orchestration of only the first two of the three acts of Lulu. The completed acts were successfully premièred in Zürich in 1937. For personal reasons Helene Berg subsequently imposed a ban on any attempt to "complete" the final act, which Berg had in fact completed in short score. An orchestration was therefore commissioned in secret from Friedrich Cerha and premièred in Paris (under Pierre Boulez) only in 1979, soon after Helene Berg's own death. Legacy Berg is remembered as one of the most important composers of the 20th century and the most widely performed opera composer among the Second Viennese School. He is said to have brought more "human values" to the twelve-tone system, his works seen as more "emotional" than Schoenberg's. Critically, he is seen as having preserved the Viennese tradition in his music. Berg scholar Douglas Jarman writes in The New Grove Dictionary of Music and Musicians that "[as] the 20th century closed, the 'backward-looking' Berg suddenly came as [George] Perle remarked, to look like its most forward-looking composer." The Alban Berg Foundation, founded by the composer's widow in 1969, cultivates the memory and works of the composer, and awards scholarships. The Alban Berg Monument, situated next to the Vienna State Opera and unveiled in 2016, was funded by the Foundation. Alban Berg Quartett was a string quartet named after him, active from 1971 until 2008. The asteroid 4528 Berg is named after him (1983). Major compositions Piano Piano Sonata, Op. 1 String Quartet, Op. 3 Lyric Suite, string quartet Chamber Concerto (1925) for piano, violin and 13 wind instruments Orchestral Three Pieces for Orchestra, Op. 6 Violin Concerto Vocal Seven Early Songs Vier Lieder (Four Songs), Op. 2 Five Orchestral Songs on Postcard Texts of Peter Altenberg, Op. 4 Der Wein Schließe mir die Augen beide Operas Wozzeck, Op. 7 (1925) Lulu (1937) Notes and references Notes References Sources Further reading Adorno, Theodor W. Alban Berg: Master of the Smallest Link. Trans. Juliane Brand and Christopher Hailey. New York: Cambridge University Press, 1991. Brand, Juliane, Christopher Hailey and Donald Harris, eds. The Berg-Schoenberg Correspondence: Selected Letters. New York: Norton, 1987. Carner, Mosco. Alban Berg: The Man and the Work. London: Duckworth, 1975. dos Santos, Silvio J. Narratives of Identity in Alban Berg's 'Lulu'''. Rochester, New York: University of Rochester Press, 2014. Floros, Constantin. Trans. by Ernest Bernhardt-Kabisch. Alban Berg and Hanna Fuchs . Bloomington: Indiana University Press, 2007. Grun, Bernard, ed. Alban Berg: Letters to his Wife. London: Faber and Faber, 1971. Headlam, Dave. The Music of Alban Berg. New Haven: Yale University Press, 1996. Jarman, Douglas. "Dr. Schon's Five-Strophe Aria: Some Notes on Tonality and Pitch Association in Berg's Lulu". Perspectives of New Music 8/2 (Spring/Summer 1970). Jarman, Douglas. "Some Rhythmic and Metric Techniques in Alban Berg's Lulu". The Musical Quarterly 56/3 (July 1970). Jarman, Douglas. "Lulu: The Sketches". International Alban Berg Society Newsletter, 6 (June 1978). Jarman, Douglas. "Countess Geschwitz's Series: A Controversy Resolved?". Proceedings of the Royal Musical Association 107 (1980/81). Jarman, Douglas. "Some Observations on Rhythm, Meter and Tempo in Lulu". In Alban Berg Studien. Ed. Rudolf Klein. Vienna: Universal Edition, 1981. Jarman, Douglas. "Lulu: The Musical and Dramatic Structure". Royal Opera House Covent Garden program notes, 1981. Jarman, Douglas. "The 'Lost' Score of the 'Symphonic Pieces from Lulu". International Alban Berg Society Newsletter 12 (Fall/Winter 1982). Leibowitz, René. Schoenberg and his school; the contemporary stage of the language of music. Trans. Dika Newlin. New York: Philosophical Library, 1949. Redlich, Hans Ferdinand. Alban Berg, the Man and His Music. London: John Calder, 1957. Reich, Willi. The life and work of Alban Berg. Trans. Cornelius Cardew. New York : Da Capo Press, 1982. Schmalfeldt, Janet. "Berg's Path to Atonality: The Piano Sonata, Op. 1". Alban Berg: Historical and Analytical Perspectives. Eds. David Gable and Robert P. Morgan, pp. 79–110. New York: Oxford University Press, 1991. Schweizer, Klaus. Die Sonatensatzform im Schaffen Alban Bergs. Stuttgart: Satz und Druck, 1970. Wilkey, Jay Weldon. Certain Aspects of Form in the Vocal Music of Alban Berg''. Ph.D. thesis. Ann Arbor: Indiana University, 1965. External links Alban Berg biography and works on the UE website (publisher) Vocal texts used by Alban Berg with translations to various languages, LiederNet Archive Alban Berg at Pytheas Center for Contemporary Music albanberg.resampled.de The most comprehensive acoustic representation of Alban Bergs Works in digital realisations. 1885 births 1935 deaths 19th-century Austrian people 20th-century Austrian composers 20th-century Austrian musicians 20th-century Austrian male musicians 20th-century Austrian people 20th-century classical composers Austrian classical composers Austrian male classical composers Austrian opera composers Austro-Hungarian military personnel of World War I Composers from Vienna Deaths due to insect bites and stings Deaths from sepsis Expressionist music Male opera composers Pupils of Arnold Schoenberg Second Viennese School Twelve-tone and serial composers
2408
https://en.wikipedia.org/wiki/Analytical%20chemistry
Analytical chemistry
Analytical chemistry studies and uses instruments and methods to separate, identify, and quantify matter. In practice, separation, identification or quantification may constitute the entire analysis or be combined with another method. Separation isolates analytes. Qualitative analysis identifies analytes, while quantitative analysis determines the numerical amount or concentration. Analytical chemistry consists of classical, wet chemical methods and modern, instrumental methods. Classical qualitative methods use separations such as precipitation, extraction, and distillation. Identification may be based on differences in color, odor, melting point, boiling point, solubility, radioactivity or reactivity. Classical quantitative analysis uses mass or volume changes to quantify amount. Instrumental methods may be used to separate samples using chromatography, electrophoresis or field flow fractionation. Then qualitative and quantitative analysis can be performed, often with the same instrument and may use light interaction, heat interaction, electric fields or magnetic fields. Often the same instrument can separate, identify and quantify an analyte. Analytical chemistry is also focused on improvements in experimental design, chemometrics, and the creation of new measurement tools. Analytical chemistry has broad applications to medicine, science, and engineering. History Analytical chemistry has been important since the early days of chemistry, providing methods for determining which elements and chemicals are present in the object in question. During this period, significant contributions to analytical chemistry included the development of systematic elemental analysis by Justus von Liebig and systematized organic analysis based on the specific reactions of functional groups. The first instrumental analysis was flame emissive spectrometry developed by Robert Bunsen and Gustav Kirchhoff who discovered rubidium (Rb) and caesium (Cs) in 1860. Most of the major developments in analytical chemistry took place after 1900. During this period, instrumental analysis became progressively dominant in the field. In particular, many of the basic spectroscopic and spectrometric techniques were discovered in the early 20th century and refined in the late 20th century. The separation sciences follow a similar time line of development and also became increasingly transformed into high performance instruments. In the 1970s many of these techniques began to be used together as hybrid techniques to achieve a complete characterization of samples. Starting in the 1970s, analytical chemistry became progressively more inclusive of biological questions (bioanalytical chemistry), whereas it had previously been largely focused on inorganic or small organic molecules. Lasers have been increasingly used as probes and even to initiate and influence a wide variety of reactions. The late 20th century also saw an expansion of the application of analytical chemistry from somewhat academic chemical questions to forensic, environmental, industrial and medical questions, such as in histology. Modern analytical chemistry is dominated by instrumental analysis. Many analytical chemists focus on a single type of instrument. Academics tend to either focus on new applications and discoveries or on new methods of analysis. The discovery of a chemical present in blood that increases the risk of cancer would be a discovery that an analytical chemist might be involved in. An effort to develop a new method might involve the use of a tunable laser to increase the specificity and sensitivity of a spectrometric method. Many methods, once developed, are kept purposely static so that data can be compared over long periods of time. This is particularly true in industrial quality assurance (QA), forensic and environmental applications. Analytical chemistry plays an increasingly important role in the pharmaceutical industry where, aside from QA, it is used in the discovery of new drug candidates and in clinical applications where understanding the interactions between the drug and the patient are critical. Classical methods Although modern analytical chemistry is dominated by sophisticated instrumentation, the roots of analytical chemistry and some of the principles used in modern instruments are from traditional techniques, many of which are still used today. These techniques also tend to form the backbone of most undergraduate analytical chemistry educational labs. Qualitative analysis Qualitative analysis determines the presence or absence of a particular compound, but not the mass or concentration. By definition, qualitative analyses do not measure quantity. Chemical tests There are numerous qualitative chemical tests, for example, the acid test for gold and the Kastle-Meyer test for the presence of blood. Flame test Inorganic qualitative analysis generally refers to a systematic scheme to confirm the presence of certain aqueous ions or elements by performing a series of reactions that eliminate a range of possibilities and then confirm suspected ions with a confirming test. Sometimes small carbon-containing ions are included in such schemes. With modern instrumentation, these tests are rarely used but can be useful for educational purposes and in fieldwork or other situations where access to state-of-the-art instruments is not available or expedient. Quantitative analysis Quantitative analysis is the measurement of the quantities of particular chemical constituents present in a substance. Quantities can be measured by mass (gravimetric analysis) or volume (volumetric analysis). Gravimetric analysis The gravimetric analysis involves determining the amount of material present by weighing the sample before and/or after some transformation. A common example used in undergraduate education is the determination of the amount of water in a hydrate by heating the sample to remove the water such that the difference in weight is due to the loss of water. Volumetric analysis Titration involves the gradual addition of a measurable reactant to an exact volume of a solution being analyzed until some equivalence point is reached. Titrating accurately to either the half-equivalence point or the endpoint of a titration allows the chemist to determine the amount of moles used, which can then be used to determine a concentration or composition of the titrant. Most familiar to those who have taken chemistry during secondary education is the acid-base titration involving a color-changing indicator, such as phenolphthalein. There are many other types of titrations, for example, potentiometric titrations or precipitation titrations. Chemists might also create titration curves in order by systematically testing the pH every drop in order to understand different properties of the titrant. Instrumental methods Spectroscopy Spectroscopy measures the interaction of the molecules with electromagnetic radiation. Spectroscopy consists of many different applications such as atomic absorption spectroscopy, atomic emission spectroscopy, ultraviolet-visible spectroscopy, X-ray spectroscopy, fluorescence spectroscopy, infrared spectroscopy, Raman spectroscopy, dual polarization interferometry, nuclear magnetic resonance spectroscopy, photoemission spectroscopy, Mössbauer spectroscopy and so on. Mass spectrometry Mass spectrometry measures mass-to-charge ratio of molecules using electric and magnetic fields. There are several ionization methods: electron ionization, chemical ionization, electrospray ionization, fast atom bombardment, matrix assisted laser desorption/ionization, and others. Also, mass spectrometry is categorized by approaches of mass analyzers: magnetic-sector, quadrupole mass analyzer, quadrupole ion trap, time-of-flight, Fourier transform ion cyclotron resonance, and so on. Electrochemical analysis Electroanalytical methods measure the potential (volts) and/or current (amps) in an electrochemical cell containing the analyte. These methods can be categorized according to which aspects of the cell are controlled and which are measured. The four main categories are potentiometry (the difference in electrode potentials is measured), coulometry (the transferred charge is measured over time), amperometry (the cell's current is measured over time), and voltammetry (the cell's current is measured while actively altering the cell's potential). Thermal analysis Calorimetry and thermogravimetric analysis measure the interaction of a material and heat. Separation Separation processes are used to decrease the complexity of material mixtures. Chromatography, electrophoresis and field flow fractionation are representative of this field. Hybrid techniques Combinations of the above techniques produce a "hybrid" or "hyphenated" technique. Several examples are in popular use today and new hybrid techniques are under development. For example, gas chromatography-mass spectrometry, gas chromatography-infrared spectroscopy, liquid chromatography-mass spectrometry, liquid chromatography-NMR spectroscopy, liquid chromatography-infrared spectroscopy, and capillary electrophoresis-mass spectrometry. Hyphenated separation techniques refer to a combination of two (or more) techniques to detect and separate chemicals from solutions. Most often the other technique is some form of chromatography. Hyphenated techniques are widely used in chemistry and biochemistry. A slash is sometimes used instead of hyphen, especially if the name of one of the methods contains a hyphen itself. Microscopy The visualization of single molecules, single cells, biological tissues, and nanomaterials is an important and attractive approach in analytical science. Also, hybridization with other traditional analytical tools is revolutionizing analytical science. Microscopy can be categorized into three different fields: optical microscopy, electron microscopy, and scanning probe microscopy. Recently, this field is rapidly progressing because of the rapid development of the computer and camera industries. Lab-on-a-chip Devices that integrate (multiple) laboratory functions on a single chip of only millimeters to a few square centimeters in size and that are capable of handling extremely small fluid volumes down to less than picoliters. Errors Error can be defined as numerical difference between observed value and true value. The experimental error can be divided into two types, systematic error and random error. Systematic error results from a flaw in equipment or the design of an experiment while random error results from uncontrolled or uncontrollable variables in the experiment. In error the true value and observed value in chemical analysis can be related with each other by the equation where is the absolute error. is the true value. is the observed value. An error of a measurement is an inverse measure of accurate measurement, i.e. smaller the error greater the accuracy of the measurement. Errors can be expressed relatively. Given the relative error(): The percent error can also be calculated: If we want to use these values in a function, we may also want to calculate the error of the function. Let be a function with variables. Therefore, the propagation of uncertainty must be calculated in order to know the error in : Standards Standard curve A general method for analysis of concentration involves the creation of a calibration curve. This allows for the determination of the amount of a chemical in a material by comparing the results of an unknown sample to those of a series of known standards. If the concentration of element or compound in a sample is too high for the detection range of the technique, it can simply be diluted in a pure solvent. If the amount in the sample is below an instrument's range of measurement, the method of addition can be used. In this method, a known quantity of the element or compound under study is added, and the difference between the concentration added and the concentration observed is the amount actually in the sample. Internal standards Sometimes an internal standard is added at a known concentration directly to an analytical sample to aid in quantitation. The amount of analyte present is then determined relative to the internal standard as a calibrant. An ideal internal standard is an isotopically enriched analyte which gives rise to the method of isotope dilution. Standard addition The method of standard addition is used in instrumental analysis to determine the concentration of a substance (analyte) in an unknown sample by comparison to a set of samples of known concentration, similar to using a calibration curve. Standard addition can be applied to most analytical techniques and is used instead of a calibration curve to solve the matrix effect problem. Signals and noise One of the most important components of analytical chemistry is maximizing the desired signal while minimizing the associated noise. The analytical figure of merit is known as the signal-to-noise ratio (S/N or SNR). Noise can arise from environmental factors as well as from fundamental physical processes. Thermal noise Thermal noise results from the motion of charge carriers (usually electrons) in an electrical circuit generated by their thermal motion. Thermal noise is white noise meaning that the power spectral density is constant throughout the frequency spectrum. The root mean square value of the thermal noise in a resistor is given by where kB is Boltzmann's constant, T is the temperature, R is the resistance, and is the bandwidth of the frequency . Shot noise Shot noise is a type of electronic noise that occurs when the finite number of particles (such as electrons in an electronic circuit or photons in an optical device) is small enough to give rise to statistical fluctuations in a signal. Shot noise is a Poisson process, and the charge carriers that make up the current follow a Poisson distribution. The root mean square current fluctuation is given by where e is the elementary charge and I is the average current. Shot noise is white noise. Flicker noise Flicker noise is electronic noise with a 1/ƒ frequency spectrum; as f increases, the noise decreases. Flicker noise arises from a variety of sources, such as impurities in a conductive channel, generation, and recombination noise in a transistor due to base current, and so on. This noise can be avoided by modulation of the signal at a higher frequency, for example, through the use of a lock-in amplifier. Environmental noise Environmental noise arises from the surroundings of the analytical instrument. Sources of electromagnetic noise are power lines, radio and television stations, wireless devices, compact fluorescent lamps and electric motors. Many of these noise sources are narrow bandwidth and, therefore, can be avoided. Temperature and vibration isolation may be required for some instruments. Noise reduction Noise reduction can be accomplished either in computer hardware or software. Examples of hardware noise reduction are the use of shielded cable, analog filtering, and signal modulation. Examples of software noise reduction are digital filtering, ensemble average, boxcar average, and correlation methods. Applications Analytical chemistry has applications including in forensic science, bioanalysis, clinical analysis, environmental analysis, and materials analysis. Analytical chemistry research is largely driven by performance (sensitivity, detection limit, selectivity, robustness, dynamic range, linear range, accuracy, precision, and speed), and cost (purchase, operation, training, time, and space). Among the main branches of contemporary analytical atomic spectrometry, the most widespread and universal are optical and mass spectrometry. In the direct elemental analysis of solid samples, the new leaders are laser-induced breakdown and laser ablation mass spectrometry, and the related techniques with transfer of the laser ablation products into inductively coupled plasma. Advances in design of diode lasers and optical parametric oscillators promote developments in fluorescence and ionization spectrometry and also in absorption techniques where uses of optical cavities for increased effective absorption pathlength are expected to expand. The use of plasma- and laser-based methods is increasing. An interest towards absolute (standardless) analysis has revived, particularly in emission spectrometry. Great effort is being put into shrinking the analysis techniques to chip size. Although there are few examples of such systems competitive with traditional analysis techniques, potential advantages include size/portability, speed, and cost. (micro total analysis system (µTAS) or lab-on-a-chip). Microscale chemistry reduces the amounts of chemicals used. Many developments improve the analysis of biological systems. Examples of rapidly expanding fields in this area are genomics, DNA sequencing and related research in genetic fingerprinting and DNA microarray; proteomics, the analysis of protein concentrations and modifications, especially in response to various stressors, at various developmental stages, or in various parts of the body, metabolomics, which deals with metabolites; transcriptomics, including mRNA and associated fields; lipidomics - lipids and its associated fields; peptidomics - peptides and its associated fields; and metallomics, dealing with metal concentrations and especially with their binding to proteins and other molecules. Analytical chemistry has played a critical role in the understanding of basic science to a variety of practical applications, such as biomedical applications, environmental monitoring, quality control of industrial manufacturing, forensic science, and so on. The recent developments in computer automation and information technologies have extended analytical chemistry into a number of new biological fields. For example, automated DNA sequencing machines were the basis for completing human genome projects leading to the birth of genomics. Protein identification and peptide sequencing by mass spectrometry opened a new field of proteomics. In addition to automating specific processes, there is effort to automate larger sections of lab testing, such as in companies like Emerald Cloud Lab and Transcriptic. Analytical chemistry has been an indispensable area in the development of nanotechnology. Surface characterization instruments, electron microscopes and scanning probe microscopes enable scientists to visualize atomic structures with chemical characterizations. See also Important publications in analytical chemistry List of chemical analysis methods List of materials analysis methods Measurement uncertainty Metrology Sensory analysis - in the field of Food science Virtual instrumentation Microanalysis Quality of analytical results Working range References Further reading Gurdeep, Chatwal Anand (2008). Instrumental Methods of Chemical Analysis Himalaya Publishing House (India) Ralph L. Shriner, Reynold C. Fuson, David Y. Curtin, Terence C. Morill: The systematic identification of organic compounds - a laboratory manual, Verlag Wiley, New York 1980, 6. edition, . Bettencourt da Silva, R; Bulska, E; Godlewska-Zylkiewicz, B; Hedrich, M; Majcen, N; Magnusson, B; Marincic, S; Papadakis, I; Patriarca, M; Vassileva, E; Taylor, P; Analytical measurement: measurement uncertainty and statistics, 2012, . External links Infografik and animation showing the progress of analytical chemistry aas Atomic Absorption Spectrophotometer Materials science
2411
https://en.wikipedia.org/wiki/A%20cappella
A cappella
A cappella (, , ; ) music is a performance by a singer or a singing group without instrumental accompaniment, or a piece intended to be performed in this fashion. The term a cappella was originally intended to differentiate between Renaissance polyphony and Baroque concertato musical styles. In the 19th century, a renewed interest in Renaissance polyphony, coupled with an ignorance of the fact that vocal parts were often doubled by instrumentalists, led to the term coming to mean unaccompanied vocal music. The term is also used, rarely, as a synonym for alla breve. Early history A cappella could be as old as humanity itself. Research suggests that singing and vocables may have been what early humans used to communicate before the invention of language. The earliest piece of sheet music is thought to have originated from times as early as 2000 BC, while the earliest that has survived in its entirety is from the first century AD: a piece from Greece called the Seikilos epitaph. Religious origins A cappella music was originally used in religious music, especially church music as well as anasheed and zemirot. Gregorian chant is an example of a cappella singing, as is the majority of secular vocal music from the Renaissance. The madrigal, up until its development in the early Baroque into an instrumentally accompanied form, is also usually in a cappella form. The Psalms note that some early songs were accompanied by string instruments, though Jewish and Early Christian music was largely a cappella; the use of instruments has subsequently increased within both of these religions as well as in Islam. Christian The polyphony of Christian a cappella music began to develop in Europe around the late 15th century AD, with compositions by Josquin des Prez. The early a cappella polyphonies may have had an accompanying instrument, although this instrument would merely double the singers' parts and was not independent. By the 16th century, a cappella polyphony had further developed, but gradually, the cantata began to take the place of a cappella forms. Sixteenth-century a cappella polyphony, nonetheless, continued to influence church composers throughout this period and to the present day. Recent evidence has shown that some of the early pieces by Palestrina, such as those written for the Sistine Chapel, were intended to be accompanied by an organ "doubling" some or all of the voices. Such is seen in the life of Palestrina becoming a major influence on Bach, most notably in the Mass in B Minor. Other composers that utilized the a cappella style, if only for the occasional piece, were Claudio Monteverdi and his masterpiece, Lagrime d'amante al sepolcro dell'amata (A lover's tears at his beloved's grave), which was composed in 1610, and Andrea Gabrieli when upon his death many choral pieces were discovered, one of which was in the unaccompanied style. Learning from the preceding two composers, Heinrich Schütz utilized the a cappella style in numerous pieces, chief among these were the pieces in the oratorio style, which were traditionally performed during the Easter week and dealt with the religious subject matter of that week, such as Christ's suffering and the Passion. Five of Schutz's Historien were Easter pieces, and of these the latter three, which dealt with the passion from three different viewpoints, those of Matthew, Luke and John, were all done a cappella style. This was a near requirement for this type of piece, and the parts of the crowd were sung while the solo parts which were the quoted parts from either Christ or the authors were performed in a plainchant. Byzantine Rite In the Byzantine Rite of the Eastern Orthodox Church and the Eastern Catholic Churches, the music performed in the liturgies is exclusively sung without instrumental accompaniment. Bishop Kallistos Ware says, "The service is sung, even though there may be no choir... In the Orthodox Church today, as in the early Church, singing is unaccompanied and instrumental music is not found." This a cappella behavior arises from strict interpretation of Psalm 150, which states, Let every thing that hath breath praise the Lord. Praise ye the Lord. In keeping with this philosophy, early Russian musika which started appearing in the late 17th century, in what was known as khorovïye kontsertï (choral concertos) made a cappella adaptations of Venetian-styled pieces, such as the treatise, Grammatika musikiyskaya (1675), by Nikolai Diletsky. Divine Liturgies and Western Rite Masses composed by famous composers such as Peter Tchaikovsky, Sergei Rachmaninoff, Alexander Arkhangelsky, and Mykola Leontovych are fine examples of this. Opposition to instruments in worship Present-day Christian religious bodies known for conducting their worship services without musical accompaniment include many Oriental Orthodox Churches (such as the Coptic Orthodox Church), many Anabaptist communities (including Old Order Anabaptist groups—such as the Amish, Old German Baptist Brethren, Old Order Mennonites, as well as Conservative Anabaptist groups—such as the Dunkard Brethren Church and Conservative Mennonites), some Presbyterian churches devoted to the regulative principle of worship, Old Regular Baptists, Primitive Baptists, Plymouth Brethren, Churches of Christ, Church of God (Guthrie, Oklahoma), the Reformed Free Methodists, Doukhobors, and the Byzantine Rite of Eastern Christianity. Certain high church services and other musical events in liturgical churches (such as the Roman Catholic Mass and the Lutheran Divine Service) may be a cappella, a practice remaining from apostolic times. Many Mennonites also conduct some or all of their services without instruments. Sacred Harp, a type of folk music, is an a cappella style of religious singing with shape notes, usually sung at singing conventions. Opponents of musical instruments in the Christian worship believe that such opposition is supported by the Christian scriptures and Church history. The scriptures typically referenced are Matthew 26:30; Acts 16:25; Romans 15:9; 1 Corinthians 14:15; Ephesians 5:19; Colossians 3:16; Hebrews 2:12, 13:15 and James 5:13, which show examples and exhortations for Christians to sing. There is no reference to instrumental music in early church worship in the New Testament, or in the worship of churches for the first six centuries. Several reasons have been posited throughout church history for the absence of instrumental music in church worship. Christians who believe in a cappella music today believe that in the Israelite worship assembly during Temple worship only the Priests of Levi sang, played, and offered animal sacrifices, whereas in the church era, all Christians are commanded to sing praises to God. They believe that if God wanted instrumental music in New Testament worship, He would have commanded not just singing, but singing and playing like he did in the Hebrew scriptures. Instruments have divided Christendom since their introduction into worship. They were considered a Roman Catholic innovation, not widely practiced until the 18th century, and were opposed vigorously in worship by a number of Protestant Reformers, including Martin Luther (1483–1546), Ulrich Zwingli, John Calvin (1509–1564) and John Wesley (1703–1791). Alexander Campbell referred to the use of an instrument in worship as "a cow bell in a concert". In Sir Walter Scott's The Heart of Midlothian, the heroine, Jeanie Deans, a Scottish Presbyterian, writes to her father about the church situation she has found in England (bold added): The folk here are civil, and, like the barbarians unto the holy apostle, have shown me much kindness; and there are a sort of chosen people in the land, for they have some kirks without organs that are like ours, and are called meeting-houses, where the minister preaches without a gown. Acceptance of instruments in worship Those who do not adhere to the regulative principle of interpreting Christian scripture, believe that limiting praise to the unaccompanied chant of the early church is not commanded in scripture, and that churches in any age are free to offer their songs with or without musical instruments. Those who subscribe to this interpretation believe that since the Christian scriptures never counter instrumental language with any negative judgment on instruments, opposition to instruments instead comes from an interpretation of history. There is no written opposition to musical instruments in any setting in the first century and a half of Christian churches (33–180 AD). The use of instruments for Christian worship during this period is also undocumented. Toward the end of the 2nd century, Christians began condemning the instruments themselves. Those who oppose instruments today believe these Church Fathers had a better understanding of God's desire for the church, but there are significant differences between the teachings of these Church Fathers and Christian opposition to instruments today. Modern Christians typically believe it is acceptable to play instruments or to attend weddings, funerals, banquets, etc., where instruments are heard playing religious music. The Church Fathers made no exceptions. Since the New Testament never condemns instruments themselves, much less in any of these settings, it is believed that "the church Fathers go beyond the New Testament in pronouncing a negative judgment on musical instruments." Written opposition to instruments in worship began near the turn of the 5th century. Modern opponents of instruments typically do not make the same assessment of instruments as these writers, who argued that God had allowed David the "evil" of using musical instruments in praise. While the Old Testament teaches that God specifically asked for musical instruments, modern concern is for worship based on the New Testament. Since "a cappella" singing brought a new polyphony (more than one note at a time) with instrumental accompaniment, it is not surprising that Protestant reformers who opposed the instruments (such as Calvin and Zwingli) also opposed the polyphony. While Zwingli was destroying organs in Switzerland – Luther called him a fanatic – the Church of England was burning books of polyphony. Some Holiness Churches such as the Free Methodist Church opposed the use of musical instruments in church worship until the mid-20th century. The Free Methodist Church allowed for local church decision on the use of either an organ or piano in the 1943 Conference before lifting the ban entirely in 1955. The Reformed Free Methodist Church and Evangelical Wesleyan Church were formed as a result of a schism with the Free Methodist Church, with the former retaining a cappella worship and the latter retaining the rule limiting the number of instruments in the church to the piano and organ. Jewish While worship in the Temple in Jerusalem included musical instruments, traditional Jewish religious services in the Synagogue, both before and after the last destruction of the Temple, did not include musical instruments given the practice of scriptural cantillation. The use of musical instruments is traditionally forbidden on the Sabbath out of concern that players would be tempted to repair (or tune) their instruments, which is forbidden on those days. (This prohibition has been relaxed in many Reform and some Conservative congregations.) Similarly, when Jewish families and larger groups sing traditional Sabbath songs known as zemirot outside the context of formal religious services, they usually do so a cappella, and Bar and Bat Mitzvah celebrations on the Sabbath sometimes feature entertainment by a cappella ensembles. During the Three Weeks musical instruments are prohibited. Many Jews consider a portion of the 49-day period of the counting of the omer between Passover and Shavuot to be a time of semi-mourning and instrumental music is not allowed during that time. This has led to a tradition of a cappella singing sometimes known as sefirah music. The popularization of the Jewish chant may be found in the writings of the Jewish philosopher Philo, born 20 BC. Weaving together Jewish and Greek thought, Philo promoted praise without instruments, and taught that "silent singing" (without even vocal chords) was better still. This view parted with the Jewish scriptures, where Israel offered praise with instruments by God's own command The shofar is the only temple instrument still being used today in the synagogue, and it is only used from Rosh Chodesh Elul through the end of Yom Kippur. The shofar is used by itself, without any vocal accompaniment, and is limited to a very strictly defined set of sounds and specific places in the synagogue service. However, silver trumpets, as described in Numbers 10:1-18, have been made in recent years and used in prayer services at the Western Wall. In the United States Peter Christian Lutkin, dean of the Northwestern University School of Music, helped popularize a cappella music in the United States by founding the Northwestern A Cappella Choir in 1906. The A Cappella Choir was "the first permanent organization of its kind in America." An a cappella tradition was begun in 1911 by F. Melius Christiansen, a music faculty member at St. Olaf College in Northfield, Minnesota. The St. Olaf College Choir was established as an outgrowth of the local St. John's Lutheran Church, where Christiansen was organist and the choir was composed, at least partially, of students from the nearby St. Olaf campus. The success of the ensemble was emulated by other regional conductors, and a tradition of a cappella choral music was born in the region at colleges like Concordia College (Moorhead, Minnesota), Augustana College (Rock Island, Illinois), Waldorf University (Forest City, Iowa), Luther College (Decorah, Iowa), Gustavus Adolphus College (St. Peter, Minnesota), Augustana College (Sioux Falls, South Dakota), and Augsburg University (Minneapolis, Minnesota). The choirs typically range from 40 to 80 singers and are recognized for their efforts to perfect blend, intonation, phrasing and pitch in a large choral setting. Movements in modern a cappella over the past century include barbershop and doo wop. The Barbershop Harmony Society, Sweet Adelines International, and Harmony Inc. host educational events including Harmony University, Directors University, and the International Educational Symposium, and international contests and conventions, recognizing international champion choruses and quartets. Many a cappella groups can be found in high schools and colleges. There are amateur Barbershop Harmony Society and professional groups that sing a cappella exclusively. Although a cappella is technically defined as singing without instrumental accompaniment, some groups use their voices to emulate instruments; others are more traditional and focus on harmonizing. A cappella styles range from gospel music to contemporary to barbershop quartets and choruses. The Contemporary A Cappella Society (CASA) is a membership option for former students, whose funds support hosted competitions and events. A cappella music was popularized between the late 2000s and the early to mid-2010s with media hits such as the 2009–2014 TV show The Sing-Off and the musical comedy film series Pitch Perfect. Recording artists In July 1943, as a result of the American Federation of Musicians boycott of US recording studios, the a cappella vocal group The Song Spinners had a best-seller with "Comin' In on a Wing and a Prayer". In the 1950s, several recording groups, notably The Hi-Los and the Four Freshmen, introduced complex jazz harmonies to a cappella performances. The King's Singers are credited with promoting interest in small-group a cappella performances in the 1960s. Frank Zappa loved doo wop and a cappella, so Zappa released The Persuasions' first album from his label in 1970. Judy Collins recorded "Amazing Grace" a cappella. In 1983, an a cappella group known as The Flying Pickets had a Christmas 'number one' in the UK with a cover of Yazoo's (known in the US as Yaz) "Only You". A cappella music attained renewed prominence from the late 1980s onward, spurred by the success of Top 40 recordings by artists such as The Manhattan Transfer, Bobby McFerrin, Huey Lewis and the News, All-4-One, The Nylons, Backstreet Boys, Boyz II Men, and *NSYNC. Contemporary a cappella includes many vocal groups and bands who add vocal percussion or beatboxing to create a pop/rock/gospel sound, in some cases very similar to bands with instruments. Examples of such professional groups include Straight No Chaser, Pentatonix, The House Jacks, Rockapella, Mosaic, Home Free and M-pact. There also remains a strong a cappella presence within Christian music, as some denominations purposefully do not use instruments during worship. Examples of such groups are Take 6, Glad and Acappella. Arrangements of popular music for small a cappella ensembles typically include one voice singing the lead melody, one singing a rhythmic bass line, and the remaining voices contributing chordal or polyphonic accompaniment. A cappella can also describe the isolated vocal track(s) from a multitrack recording that originally included instrumentation. These vocal tracks may be remixed or put onto vinyl records for DJs, or released to the public so that fans can remix them. One such example is the a cappella release of Jay-Z's Black Album, which Danger Mouse mixed with the Beatles' White Album to create The Grey Album. On their 1966 album titled Album, Peter, Paul and Mary included the song "Norman Normal". All the sounds on that song, both vocals and instruments, were created by Paul's voice, with no actual instruments used. In 2013, an artist by the name Smooth McGroove rose to prominence with his style of a cappella music. He is best known for his a cappella covers of video game music tracks on YouTube. in 2015, an a cappella version of Jerusalem by multi-instrumentalist Jacob Collier was selected for Beats by Dre "The Game Starts Here" for the England Rugby World Cup campaign. Musical theatre A cappella has been used as the sole orchestration for original works of musical theatre that have had commercial runs Off-Broadway (theatres in New York City with 99 to 500 seats) only four times. The first was Avenue X which opened on 28 January 1994 and ran for 77 performances. It was produced by Playwrights Horizons with book by John Jiler, music and lyrics by Ray Leslee. The musical style of the show's score was primarily Doo-Wop as the plot revolved around Doo-Wop group singers of the 1960s. In 2001, The Kinsey Sicks, produced and starred in the critically acclaimed off-Broadway hit, "DRAGAPELLA! Starring the Kinsey Sicks" at New York's legendary Studio 54. That production received a nomination for a Lucille Lortel award as Best Musical and a Drama Desk nomination for Best Lyrics. It was directed by Glenn Casale with original music and lyrics by Ben Schatz. The a cappella musical Perfect Harmony, a comedy about two high school a cappella groups vying to win the National championship, made its Off Broadway debut at Theatre Row's Acorn Theatre on 42nd Street in New York City in October 2010 after a successful out-of-town run at the Stoneham Theatre, in Stoneham, Massachusetts. Perfect Harmony features the hit music of The Jackson 5, Pat Benatar, Billy Idol, Marvin Gaye, Scandal, Tiffany, The Romantics, The Pretenders, The Temptations, The Contours, The Commodores, Tommy James & the Shondells and The Partridge Family, and has been compared to a cross between Altar Boyz and The 25th Annual Putnam County Spelling Bee. The fourth a cappella musical to appear Off-Broadway, In Transit, premiered 5 October 2010 and was produced by Primary Stages with book, music, and lyrics by Kristen Anderson-Lopez, James-Allen Ford, Russ Kaplan, and Sara Wordsworth. Set primarily in the New York City subway system its score features an eclectic mix of musical genres (including jazz, hip hop, Latin, rock, and country). In Transit incorporates vocal beat boxing into its contemporary a cappella arrangements through the use of a subway beat boxer character. Beat boxer and actor Chesney Snow performed this role for the 2010 Primary Stages production. According to the show's website, it is scheduled to reopen for an open-ended commercial run in the Fall of 2011. In 2011, the production received four Lucille Lortel Award nominations including Outstanding Musical, Outer Critics Circle and Drama League nominations, as well as five Drama Desk nominations including Outstanding Musical and won for Outstanding Ensemble Performance. In December 2016, In Transit became the first a cappella musical on Broadway. Barbershop style Barbershop music is one of several uniquely American art forms. The earliest reports of this style of a cappella music involved African Americans. The earliest documented quartets all began in barber shops. In 1938, the first formal men's barbershop organization was formed, known as the Society for the Preservation and Encouragement of Barber Shop Quartet Singing in America (S.P.E.B.S.Q.S.A), and in 2004 rebranded itself and officially changed its public name to the Barbershop Harmony Society (BHS). Today the BHS has about 22,000 members in approximately 800 chapters across the United States and Canada, and the barbershop style has spread around the world with organizations in many other countries. The Barbershop Harmony Society provides a highly organized competition structure for a cappella quartets and choruses singing in the barbershop style. In 1945, the first formal women's barbershop organization, Sweet Adelines, was formed. In 1953, Sweet Adelines became an international organization, although it did not change its name to Sweet Adelines International until 1991. The membership of nearly 25,000 women, all singing in English, includes choruses in most of the fifty United States as well as in Australia, Canada, Finland, Germany, Ireland, Japan, New Zealand, Spain, Sweden, the United Kingdom, and the Netherlands. Headquartered in Tulsa, Oklahoma, the organization encompasses more than 1,200 registered quartets and 600 choruses. In 1959, a second women's barbershop organization started as a break off from Sweet Adelines due to ideological differences. Based on democratic principles which continue to this day, Harmony, Inc. is smaller than its counterpart, but has an atmosphere of friendship and competition. With about 2,500 members in the United States and Canada, Harmony, Inc. uses the same rules in contest that the Barbershop Harmony Society uses. Harmony, Inc. is registered in Providence, Rhode Island. Amateur and high school The popularity of a cappella among high schools and amateurs was revived by television shows and movies such as Glee and Pitch Perfect. High school groups may have conductors or student leaders who keep the tempo for the group, or beatboxers/vocal percussionists. Since 2013, summer training programs have appeared, such as A Cappella Academy in Los Angeles, California (founded by Ben Bram, Rob Dietz, and Avi Kaplan) and Camp A Cappella in Dayton, Ohio (founded by Deke Sharon and Brody McDonald). These programs teach about different aspects of a cappella music, including vocal performance, arranging, and beatboxing/vocal percussion. In other countries Afghanistan The Islamic Emirate of Afghanistan has no official anthem because of views by the Taliban of music as un-Islamic. However, the de facto national anthem of Afghanistan is an a cappella nasheed, as musical instruments are virtually banned as corrupting and un-Islamic. Iran The first a cappella group in Iran is the Damour Vocal Band, which was able to perform on national television despite a ban on women singing. Pakistan The musical show Strepsils Stereo is credited for introducing the art of a cappella in Pakistan. Sri Lanka Composer Dinesh Subasinghe became the first Sri Lankan to write a cappella pieces for SATB choirs. He wrote "The Princes of the Lost Tribe" and "Ancient Queen of Somawathee" for Menaka De Sahabandu and Bridget Helpe's choirs, respectively, based on historical incidents in ancient Sri Lanka. Voice Print is also a professional a cappella music group in Sri Lanka. Sweden The European a cappella tradition is especially strong in the countries around the Baltic and perhaps most so in Sweden as described by Richard Sparks in his doctoral thesis The Swedish Choral Miracle in 2000. Swedish a cappella choirs have over the last 25 years won around 25% of the annual prestigious European Grand Prix for Choral Singing (EGP) that despite its name is open to choirs from all over the world (see list of laureates in the Wikipedia article on the EGP competition). The reasons for the strong Swedish dominance are as explained by Richard Sparks manifold; suffice to say here that there is a long-standing tradition, an unusually large proportion of the populations (5% is often cited) regularly sing in choirs, the Swedish choral director Eric Ericson had an enormous impact on a cappella choral development not only in Sweden but around the world, and finally there are a large number of very popular primary and secondary schools ('music schools') with high admission standards based on auditions that combine a rigid academic regimen with high level choral singing on every school day, a system that started with Adolf Fredrik's Music School in Stockholm in 1939 but has spread over the country. United Kingdom A cappella has gained attention in the UK in recent years, with many groups forming at British universities by students seeking an alternative singing pursuit to traditional choral and chapel singing. This movement has been bolstered by organisations such as The Voice Festival UK. Western collegiate It is not clear exactly where collegiate a cappella began. The Rensselyrics of Rensselaer Polytechnic Institute (formerly known as the RPI Glee Club), established in 1873 is perhaps the oldest known collegiate a cappella group. The longest continuously singing group is probably The Whiffenpoofs of Yale University, which was formed in 1909 and once included Cole Porter as a member. Collegiate a cappella groups grew throughout the 20th century. Some notable historical groups formed along the way include Colgate University's The Colgate 13 (1942), Dartmouth College's Aires (1946), Cornell University's Cayuga's Waiters (1949) and The Hangovers (1968), the University of Maine Maine Steiners (1958), the Columbia University Kingsmen (1949), the Jabberwocks of Brown University (1949), and the University of Rochester YellowJackets (1956). All-women a cappella groups followed shortly, frequently as a parody of the men's groups: the Smiffenpoofs of Smith College (1936), the Night Owls of Vassar College (1942), The Shwiffs of Connecticut College (The She-Whiffenpoofs, 1944), and The Chattertocks of Brown University (1951). A cappella groups exploded in popularity beginning in the 1990s, fueled in part by a change in style popularized by the Tufts University Beelzebubs and the Boston University Dear Abbeys. The new style used voices to emulate modern rock instruments, including vocal percussion/"beatboxing". Some larger universities now have multiple groups. Groups often join one another in on-campus concerts, such as the Georgetown Chimes' Cherry Tree Massacre, a 3-weekend a cappella festival held each February since 1975, where over a hundred collegiate groups have appeared, as well as International Quartet Champions The Boston Common and the contemporary commercial a cappella group Rockapella. Co-ed groups have produced many up-and-coming and major artists, including John Legend, an alumnus of the Counterparts at the University of Pennsylvania, Sara Bareilles, an alumna of Awaken A Cappella at University of California, Los Angeles, and Mindy Kaling, an alumna of the Rockapellas at Dartmouth College. Mira Sorvino is an alumna of the Harvard-Radcliffe Veritones of Harvard College, where she had the solo on Only You by Yaz. Jewish-interest groups such as Queens College's Tizmoret, Tufts University's Shir Appeal, University of Chicago's Rhythm and Jews, Binghamton University's Kaskeset, Ohio State University's Meshuganotes, Rutgers University's Kol Halayla, New York University's Ani V'Ata, University of California, Los Angeles's Jewkbox, and Yale University's Magevet are also gaining popularity across the U.S. Increased interest in modern a cappella (particularly collegiate a cappella) can be seen in the growth of awards such as the Contemporary A Cappella Recording Awards (overseen by the Contemporary A Cappella Society) and competitions such as the International Championship of Collegiate A Cappella for college groups and the Harmony Sweepstakes for all groups. In December 2009, a new television competition series called The Sing-Off aired on NBC. The show featured eight a cappella groups from the United States and Puerto Rico vying for the prize of $100,000 and a recording contract with Epic Records/Sony Music. The show was judged by Ben Folds, Shawn Stockman, and Nicole Scherzinger and was won by an all-male group from Puerto Rico called Nota. The show returned for a second, third, fourth, and fifth season, won by Committed, Pentatonix, Home Free, and The Melodores from Vanderbilt University respectively. Each year, hundreds of Collegiate a cappella groups submit their strongest songs in a competition to be on The Best of College A Cappella (BOCA), an album compilation of tracks from the best college a cappella groups around the world. The album is produced by Varsity Vocals – which also produces the International Championship of Collegiate A Cappella – and Deke Sharon. ). According to ethnomusicologist Joshua S. Dunchan, "BOCA carries considerable cache and respect within the field despite the appearance of other compilations in part, perhaps, because of its longevity and the prestige of the individuals behind it." Collegiate a cappella groups may also submit their tracks to Voices Only, a two-disc series released at the beginning of each school year. A Voices Only album has been released every year since 2005. In addition, from 2014 to 2019, female-identifying a cappella groups had the opportunity to send their strongest song tracks to the Women's A Cappella Association (WACA) for its annual best of women's a cappella album. WACA offered another medium for women's voices to receive recognition and released an album every year from 2014 to 2019, featuring female-identifying groups from across the United States. The Women's A Cappella Association hosted seven annual festivals in California before ending operations in 2019. South Asian collegiate South Asian a cappella features a mash-up of western and Indian/middle-eastern songs, which places it in the category of South Asian fusion music. A cappella is gaining popularity among South Asians with the emergence of primarily Hindi-English college groups. The first South Asian a cappella group was Penn Masala, founded in 1996 at the University of Pennsylvania. Co-ed South Asian a cappella groups are also gaining in popularity. The first co-ed South Asian a cappella was Anokha, from the University of Maryland, formed in 2001. Also, Dil se, another co-ed a cappella from UC Berkeley, hosts the "Anahat" competition at the University of California, Berkeley annually. Maize Mirchi, the co-ed a cappella group from the University of Michigan hosts "Sa Re Ga Ma Pella", an annual South Asian a cappella invitational with various groups from the Midwest. More South Asian a cappella groups from the Midwest are Chai Town from the University of Illinois Urbana-Champaign and Dhamakapella from Case Western Reserve University. Much like the ICCA competitions, the South Asian A Cappella competitive circuit is governed by the Association of South-Asian A Cappella, a nonprofit formed in 2016. The competitive circuit consists of qualifier, or bid competitions, as well as the national championship, All-American Awaaz. Swaram A Cappella from Texas A&M and Dhamakapella from Case Western Reserve University jointly hold the record for most All-American Awaaz Championships, with two apiece. Emulating instruments In addition to singing words, some a cappella singers also emulate instrumentation by reproducing instrumental sounds with their vocal cords and mouth, often pitched using specialised pitch pipes. One of the earliest 20th century practitioners of this method were The Mills Brothers whose early recordings of the 1930s clearly stated on the label that all instrumentation was done vocally. More recently, "Twilight Zone" by 2 Unlimited was sung a cappella to the instrumentation on the comedy television series Tompkins Square. Another famous example of emulating instrumentation instead of singing the words is the theme song for The New Addams Family series on Fox Family Channel (now Freeform). Groups such as Vocal Sampling and Undivided emulate Latin rhythms a cappella. In the 1960s, the Swingle Singers used their voices to emulate musical instruments to Baroque and Classical music. Vocal artist Bobby McFerrin is famous for his instrumental emulation. A cappella group Naturally Seven recreates entire songs using vocal tones for every instrument. The Swingle Singers used ad libs to sound like instruments, but have been known to produce non-verbal versions of musical instruments. Beatboxing, more accurately known as vocal percussion, is a technique used in a cappella music popularized by the hip-hop community, where rap is often performed a cappella. The advent of vocal percussion added new dimensions to the a cappella genre and has become very prevalent in modern arrangements. Beatboxing is performed often by shaping the mouth, making pops and clicks as pseudo-drum sounds. A popular phrase that beat boxers use to begin their training is the phrase "boots and cats". As the beat boxer progresses in their training, they remove the vowels and continue on from there, emulating a "bts n cts n" sound, a solid base for beginner beat boxers. The phrase has become popular enough to where Siri recites "Boots and Cats" when you ask it to beatbox. Jazz vocalist Petra Haden used a four-track recorder to produce an a cappella version of The Who Sell Out including the instruments and fake advertisements on her album Petra Haden Sings: The Who Sell Out in 2005. Haden has also released a cappella versions of Journey's "Don't Stop Believin'", The Beach Boys' "God Only Knows" and Michael Jackson's "Thriller". See also Lists of a cappella groups List of professional a cappella groups List of collegiate a cappella groups in the United States List of university a cappella groups in the United Kingdom Notes Footnotes References External links A Cappella Music Awards Singing Vocal music Musical terminology Medieval music genres 16th-century music genres 20th-century music genres 21st-century music genres Italian words and phrases
2414
https://en.wikipedia.org/wiki/Arrangement
Arrangement
In music, an arrangement is a musical adaptation of an existing composition. Differences from the original composition may include reharmonization, melodic paraphrasing, orchestration, or formal development. Arranging differs from orchestration in that the latter process is limited to the assignment of notes to instruments for performance by an orchestra, concert band, or other musical ensemble. Arranging "involves adding compositional techniques, such as new thematic material for introductions, transitions, or modulations, and endings. Arranging is the art of giving an existing melody musical variety". In jazz, a memorized (unwritten) arrangement of a new or pre-existing composition is known as a head arrangement. Classical music Arrangement and transcriptions of classical and serious music go back to the early history of this genre. Eighteenth century J.S. Bach frequently made arrangements of his own and other composers' pieces. One example is the arrangement that he made of the Prelude from his Partita No. 3 for solo violin, BWV 1006. Bach transformed this solo piece into an orchestral Sinfonia that introduces his Cantata BWV29. "The initial violin composition was in E major but both arranged versions are transposed down to D, the better to accommodate the wind instruments". "The transformation of material conceived for a single string instrument into a fully orchestrated concerto-type movement is so successful that it is unlikely that anyone hearing the latter for the first time would suspect the existence of the former". Nineteenth and twentieth centuries Piano music In particular, music written for the piano has frequently undergone this treatment, as it has been arranged for orchestra, chamber ensemble or concert band. Beethoven made an arrangement of his Piano Sonata No.9 for string quartet. Conversely, Beethoven also arranged his Grosse Fuge (a movement from one of his late string quartets) for piano duet. Due to his lack of expertise in orchestration, the American composer George Gershwin had his Rhapsody in Blue arranged and orchestrated by Ferde Grofé. Erik Satie wrote his three Gymnopédies for solo piano in 1888. Eight years later, Debussy arranged two of them, exploiting the range of instrumental timbres available in a late 19th century orchestra. "It was Debussy whose 1896 orchestrations of the Gymnopédies put their composer on the map." Pictures at an Exhibition, a suite of ten piano pieces by Modest Mussorgsky, has been arranged over twenty times, notably by Maurice Ravel. Ravel's arrangement demonstrates an "ability to create unexpected, memorable orchestral sonorities". In the second movement, "Gnomus", Mussorgsky's original piano piece simply repeats the following passage: Ravel initially orchestrates it as follows: Repeating the passage, Ravel provides a fresh orchestration "this time with the celesta (replacing the woodwinds) accompanied by string glissandos on the fingerboard". Songs A number of Franz Schubert's songs, originally for voice with piano accompaniment, were arranged by other composers. For example, Schubert's "highly charged, graphic" song Erlkönig (the Erl King) has a piano introduction that conveys "unflagging energy" from the start: The arrangement of this song by Hector Berlioz uses strings to convey faithfully the driving urgency and threatening atmosphere of the original. Berlioz adds colour in bars 6-8 through the addition of woodwind, horns, and a timpani. With typical flamboyance, Berlioz adds spice to the harmony in bar 6 with an E flat in the horn part, creating a half-diminished seventh chord which is not in Schubert's original piano part. There are subtle differences between this and the arrangement of the song by Franz Liszt. The upper string sound is thicker, with violins and violas playing the fierce repeated octaves in unison and bassoons compensating for this by doubling the cellos and basses. There are no timpani, but trumpets and horns add a small jolt to the rhythm of the opening bar, reinforcing the bare octaves of the strings by playing on the second main beat. Unlike Berlioz, Liszt does not alter the harmony, but changes the emphasis somewhat in bar 6, with the note A in the oboes and clarinets grating against rather than blending with the G in the strings. "Schubert has come in for his fair share of transcriptions and arrangements. Most, like Liszt's transcriptions of the Lieder or Berlioz’s orchestration for Erlkönig, tell us more about the arranger that about the original composer, but they can be diverting so long as they are in no way a replacement for the original". Gustav Mahler's Lieder eines fahrenden Gesellen (Songs of a Wayfarer) were originally written for voice with piano accompaniment. The composer's later arrangement of the piano part shows a typical ear for clarity and transparency in re-writing for an ensemble. Here is the original piano version of the closing bars of the second song, "Gieng heit' Morgen über's Feld": The orchestration shows Mahler's attention to detail in bringing out differentiated orchestral colours supplied by woodwind, strings and horn. Mahler uses a harp to convey the original arpeggios supplied by the left hand of the piano part. Mahler also extracts a descending chromatic melodic line, implied by the left hand in bars 2-4 (above) and gives it to the horn. Popular music Popular music recordings often include parts for brass horn sections, bowed strings, and other instruments that were added by arrangers and not composed by the original songwriters. Some pop arrangers even add sections using full orchestra, though this is less common due to the expense. Popular music arrangements may also be considered to include new releases of existing songs with a new musical treatment. These changes can include alterations to tempo, meter, key, instrumentation, and other musical elements. Well-known examples include Joe Cocker's version of the Beatles' "With a Little Help from My Friends," Cream's "Crossroads", and Ike and Tina Turner's version of Creedence Clearwater Revival's "Proud Mary". The American group Vanilla Fudge and British group Yes based their early careers on radical re-arrangements of contemporary hits. Bonnie Pointer performed disco and Motown-themed versions of "Heaven Must Have Sent You." Remixes, such as in dance music, can also be considered arrangements. Jazz Arrangements for small jazz combos are usually informal, minimal, and uncredited. Larger ensembles have generally had greater requirements for notated arrangements, though the early Count Basie big band is known for its many head arrangements, so called because they were worked out by the players themselves, memorized ("in the player's head"), and never written down. Most arrangements for big bands, however, were written down and credited to a specific arranger, as with arrangements by Sammy Nestico and Neal Hefti for Count Basie's later big bands. Don Redman made innovations in jazz arranging as a part of Fletcher Henderson's orchestra in the 1920s. Redman's arrangements introduced a more intricate melodic presentation and soli performances for various sections of the big band. Benny Carter became Henderson's primary arranger in the early 1930s, becoming known for his arranging abilities in addition to his previous recognition as a performer. Beginning in 1938, Billy Strayhorn became an arranger of great renown for the Duke Ellington orchestra. Jelly Roll Morton is sometimes considered the earliest jazz arranger. While he toured around the years 1912 to 1915, he wrote down parts to enable "pickup bands" to perform his compositions. Big-band arrangements are informally called charts. In the swing era they were usually either arrangements of popular songs or they were entirely new compositions. Duke Ellington's and Billy Strayhorn's arrangements for the Duke Ellington big band were usually new compositions, and some of Eddie Sauter's arrangements for the Benny Goodman band and Artie Shaw's arrangements for his own band were new compositions as well. It became more common to arrange sketchy jazz combo compositions for big band after the bop era. After 1950, the big bands declined in number. However, several bands continued and arrangers provided renowned arrangements. Gil Evans wrote a number of large-ensemble arrangements in the late 1950s and early 1960s intended for recording sessions only. Other arrangers of note include Vic Schoen, Pete Rugolo, Oliver Nelson, Johnny Richards, Billy May, Thad Jones, Maria Schneider, Bob Brookmeyer, Lou Marini, Nelson Riddle, Ralph Burns, Billy Byers, Gordon Jenkins, Ray Conniff, Henry Mancini, Ray Reach, Vince Mendoza, and Claus Ogerman. In the 21st century, the big-band arrangement has made a modest comeback. Gordon Goodwin, Roy Hargrove, and Christian McBride have all rolled out new big bands with both original compositions and new arrangements of standard tunes. For instrumental groups Strings The string section is a body of instruments composed of various bowed stringed instruments. By the 19th century orchestral music in Europe had standardized the string section into the following homogeneous instrumental groups: first violins, second violins (the same instrument as the first violins, but typically playing an accompaniment or harmony part to the first violins, and often at a lower pitch range), violas, cellos, and double basses. The string section in a multi-sectioned orchestra is sometimes referred to as the "string choir." The harp is also a stringed instrument, but is not a member of nor homogeneous with the violin family and is not considered part of the string choir. Samuel Adler classifies the harp as a plucked string instrument in the same category as the guitar (acoustic or electric), mandolin, banjo, or zither. Like the harp these instruments do not belong to the violin family and are not homogeneous with the string choir. In modern arranging these instruments are considered part of the rhythm section. The electric bass and upright string bass—depending on the circumstance—can be treated by the arranger as either string section or rhythm section instruments. A group of instruments in which each member plays a unique part—rather than playing in unison with other like instruments—is referred to as a chamber ensemble. A chamber ensemble made up entirely of strings of the violin family is referred to by its size. A string trio consists of three players, a string quartet four, a string quintet five, and so on. In most circumstances the string section is treated by the arranger as one homogeneous unit and its members are required to play preconceived material rather than improvise. A string section can be utilized on its own (this is referred to as a string orchestra) or in conjunction with any of the other instrumental sections. More than one string orchestra can be utilized. A standard string section (vln., vln 2., vla., vcl, cb.) with each section playing unison allows the arranger to create a five-part texture. Often an arranger will divide each violin section in half or thirds to achieve a denser texture. It is possible to carry this division to its logical extreme in which each member of the string section plays his or her own unique part. Size of the string section Artistic, budgetary and logistical concerns, including the size of the orchestra pit or hall will determine the size and instrumentation of a string section. The Broadway musical West Side Story, in 1957, was booked into the Winter Garden theater; composer Leonard Bernstein disliked the playing of "house" viola players he would have to use there, and so he chose to leave them out of the show's instrumentation; a benefit was the creation of more space in the pit for an expanded percussion section. George Martin, producer and arranger for The Beatles, warns arrangers about the intonation problems when only two like instruments play in unison: "After a string quartet, I do not think there is a satisfactory sound for strings until one has at least three players on each line . . . as a rule two stringed instruments together create a slight 'beat' which does not give a smooth sound." Different music directors may use different numbers of string players and different balances between the sections to create different musical effects. While any combination and number of string instruments is possible in a section, a traditional string section sound is achieved with a violin-heavy balance of instruments. Further reading See also Transcription (music) Instrumentation (music) Orchestration Reduction (music) Musical notation American Society of Music Arrangers and Composers Electronic keyboard (or Electronic Music Arranger), which allows for live music arrangement List of music arrangers List of jazz arrangers :Category:Music arrangers References Sources Kers, Robert de (1944). Harmonie et orchestration pour orchestra de danse. Bruxelles: Éditions musicales C. Bens. vii, 126 p. Kidd, Jim (1987). Unsung Heroes, the Jazz Arrangers, from Don Redman to Sy Oliver: [text with recorded examples for a presentation] Prepared on the Occasion of the 16th Annual Canadian Collectors' Congress, 25 April 1987, Toronto, Ont. Toronto: Canadian Collectors' Congress. Photo-reproduced text ([6] leaves) with audiocassette of recorded illustrative musical examples. Randel, Don Michael (2002). The Harvard Concise Dictionary of Music and Musicians. . External links An oral history of pop music arranging, compiled by Richard Niles: Part 1, Part 2, Part 3
2428
https://en.wikipedia.org/wiki/Analog%20computer
Analog computer
An analog computer or analogue computer is a type of computer that uses the continuous variation aspect of physical phenomena such as electrical, mechanical, or hydraulic quantities (analog signals) to model the problem being solved. In contrast, digital computers represent varying quantities symbolically and by discrete values of both time and amplitude (digital signals). Analog computers can have a very wide range of complexity. Slide rules and nomograms are the simplest, while naval gunfire control computers and large hybrid digital/analog computers were among the most complicated. Complex mechanisms for process control and protective relays used analog computation to perform control and protective functions. Analog computers were widely used in scientific and industrial applications even after the advent of digital computers, because at the time they were typically much faster, but they started to become obsolete as early as the 1950s and 1960s, although they remained in use in some specific applications, such as aircraft flight simulators, the flight computer in aircraft, and for teaching control systems in universities. Perhaps the most relatable example of analog computers are mechanical watches where the continuous and periodic rotation of interlinked gears drives the second, minute and hour needles in the clock. More complex applications, such as aircraft flight simulators and synthetic-aperture radar, remained the domain of analog computing (and hybrid computing) well into the 1980s, since digital computers were insufficient for the task. Timeline of analog computers Precursors This is a list of examples of early computation devices considered precursors of the modern computers. Some of them may even have been dubbed 'computers' by the press, though they may fail to fit modern definitions. The Antikythera mechanism, a type of device used to determine the positions of heavenly bodies known as an orrery, was described as an early mechanical analog computer by British physicist, information scientist, and historian of science Derek J. de Solla Price. It was discovered in 1901, in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to , during the Hellenistic period. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was first described by Ptolemy in the 2nd century AD. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BC and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, . The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Aviation is one of the few fields where slide rules are still in widespread use, particularly for solving time–distance problems in light aircraft. In 1831–1835, mathematician and engineer Giovanni Plana devised a perpetual-calendar machine, which, through a system of pulleys and cylinders, could predict the perpetual calendar for every year from AD 0 (that is, 1 BC) to AD 4000, keeping track of leap years and varying day length. The tide-predicting machine invented by Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876 James Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. A number of similar systems followed, notably those of the Spanish engineer Leonardo Torres Quevedo, who built several machines for solving real and complex roots of polynomials; and Michelson and Stratton, whose Harmonic Analyser performed Fourier analysis, but using an array of 80 springs rather than Kelvin integrators. This work led to the mathematical understanding of the Gibbs phenomenon of overshoot in Fourier representation near discontinuities. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. Modern era The Dumaresq was a mechanical calculating device invented around 1902 by Lieutenant John Dumaresq of the Royal Navy. It was an analog computer that related vital variables of the fire control problem to the movement of one's own ship and that of a target ship. It was often used with other devices, such as a Vickers range clock to generate range and deflection data so the gun sights of the ship could be continuously set. A number of versions of the Dumaresq were produced of increasing complexity as development proceeded. By 1912, Arthur Pollen had developed an electrically driven mechanical analog computer for fire-control systems, based on the differential analyser. It was used by the Imperial Russian Navy in World War I. Starting in 1929, AC network analyzers were constructed to solve calculation problems related to electrical power systems that were too large to solve with numerical methods at the time. These were essentially scale models of the electrical properties of the full-size system. Since network analyzers could handle problems too large for analytic methods or hand computation, they were also used to solve problems in nuclear physics and in the design of structures. More than 50 large network analyzers were built by the end of the 1950s. World War II era gun directors, gun data computers, and bomb sights used mechanical analog computers. In 1942 Helmut Hölzer built a fully electronic analog computer at Peenemünde Army Research Center as an embedded control system (mixing device) to calculate V-2 rocket trajectories from the accelerations and orientations (measured by gyroscopes) and to stabilize and guide the missile. Mechanical analog computers were very important in gun fire control in World War II, the Korean War and well past the Vietnam War; they were made in significant numbers. In the period 1930–1945 in the Netherlands, Johan van Veen developed an analogue computer to calculate and predict tidal currents when the geometry of the channels are changed. Around 1950, this idea was developed into the Deltar, a hydraulic analogy computer supporting the closure of estuaries in the southwest of the Netherlands (the Delta Works). The FERMIAC was an analog computer invented by physicist Enrico Fermi in 1947 to aid in his studies of neutron transport. Project Cyclone was an analog computer developed by Reeves in 1950 for the analysis and design of dynamic systems. Project Typhoon was an analog computer developed by RCA in 1952. It consisted of over 4,000 electron tubes and used 100 dials and 6,000 plug-in connectors to program. The MONIAC Computer was a hydraulic analogy of a national economy first unveiled in 1949. Computer Engineering Associates was spun out of Caltech in 1950 to provide commercial services using the "Direct Analogy Electric Analog Computer" ("the largest and most impressive general-purpose analyzer facility for the solution of field problems") developed there by Gilbert D. McCann, Charles H. Wilts, and Bart Locanthi. Educational analog computers illustrated the principles of analog calculation. The Heathkit EC-1, a $199 educational analog computer, was made by the Heath Company, US . It was programmed using patch cords that connected nine operational amplifiers and other components. General Electric also marketed an "educational" analog computer kit of a simple design in the early 1960s consisting of two transistor tone generators and three potentiometers wired such that the frequency of the oscillator was nulled when the potentiometer dials were positioned by hand to satisfy an equation. The relative resistance of the potentiometer was then equivalent to the formula of the equation being solved. Multiplication or division could be performed, depending on which dials were inputs and which was the output. Accuracy and resolution was limited and a simple slide rule was more accurate. However, the unit did demonstrate the basic principle. Analog computer designs were published in electronics magazines. One example is the PEAC (Practical Electronics analogue computer), published in Practical Electronics in the January 1968 edition. Another more modern hybrid computer design was published in Everyday Practical Electronics in 2002. An example described in the EPE hybrid computer was the flight of a VTOL aircraft such as the Harrier jump jet. The altitude and speed of the aircraft were calculated by the analog part of the computer and sent to a PC via a digital microprocessor and displayed on the PC screen. In industrial process control, analog loop controllers were used to automatically regulate temperature, flow, pressure, or other process conditions. The technology of these controllers ranged from purely mechanical integrators, through vacuum-tube and solid-state devices, to emulation of analog controllers by microprocessors. Electronic analog computers The similarity between linear mechanical components, such as springs and dashpots (viscous-fluid dampers), and electrical components, such as capacitors, inductors, and resistors is striking in terms of mathematics. They can be modeled using equations of the same form. However, the difference between these systems is what makes analog computing useful. Complex systems often are not amenable to pen-and-paper analysis, and require some form of testing or simulation. Complex mechanical systems, such as suspensions for racing cars, are expensive to fabricate and hard to modify. And taking precise mechanical measurements during high-speed tests adds further difficulty. By contrast, it is very inexpensive to build an electrical equivalent of a complex mechanical system, to simulate its behavior. Engineers arrange a few operational amplifiers (op amps) and some passive linear components to form a circuit that follows the same equations as the mechanical system being simulated. All measurements can be taken directly with an oscilloscope. In the circuit, the (simulated) stiffness of the spring, for instance, can be changed by adjusting the parameters of an integrator. The electrical system is an analogy to the physical system, hence the name, but it is much less expensive than a mechanical prototype, much easier to modify, and generally safer. The electronic circuit can also be made to run faster or slower than the physical system being simulated. Experienced users of electronic analog computers said that they offered a comparatively intimate control and understanding of the problem, relative to digital simulations. Electronic analog computers are especially well-suited to representing situations described by differential equations. Historically, they were often used when a system of differential equations proved very difficult to solve by traditional means. As a simple example, the dynamics of a spring-mass system can be described by the equation , with as the vertical position of a mass , the damping coefficient, the spring constant and the gravity of Earth. For analog computing, the equation is programmed as . The equivalent analog circuit consists of two integrators for the state variables (speed) and (position), one inverter, and three potentiometers. Electronic analog computers have drawbacks: the value of the circuit's supply voltage limits the range over which the variables may vary (since the value of a variable is represented by a voltage on a particular wire). Therefore, each problem must be scaled so its parameters and dimensions can be represented using voltages that the circuit can supply —e.g., the expected magnitudes of the velocity and the position of a spring pendulum. Improperly scaled variables can have their values "clamped" by the limits of the supply voltage. Or if scaled too small, they can suffer from higher noise levels. Either problem can cause the circuit to produce an incorrect simulation of the physical system. (Modern digital simulations are much more robust to widely varying values of their variables, but are still not entirely immune to these concerns: floating-point digital calculations support a huge dynamic range, but can suffer from imprecision if tiny differences of huge values lead to numerical instability.) The precision of the analog computer readout was limited chiefly by the precision of the readout equipment used, generally three or four significant figures. (Modern digital simulations are much better in this area. Digital arbitrary-precision arithmetic can provide any desired degree of precision.) However, in most cases the precision of an analog computer is absolutely sufficient given the uncertainty of the model characteristics and its technical parameters. Many small computers dedicated to specific computations are still part of industrial regulation equipment, but from the 1950s to the 1970s, general-purpose analog computers were the only systems fast enough for real time simulation of dynamic systems, especially in the aircraft, military and aerospace field. In the 1960s, the major manufacturer was Electronic Associates of Princeton, New Jersey, with its 231R Analog Computer (vacuum tubes, 20 integrators) and subsequently its EAI 8800 Analog Computer (solid state operational amplifiers, 64 integrators). Its challenger was Applied Dynamics of Ann Arbor, Michigan. Although the basic technology for analog computers is usually operational amplifiers (also called "continuous current amplifiers" because they have no low frequency limitation), in the 1960s an attempt was made in the French ANALAC computer to use an alternative technology: medium frequency carrier and non dissipative reversible circuits. In the 1970s, every large company and administration concerned with problems in dynamics had an analog computing center, such as: In the US: NASA (Huntsville, Houston), Martin Marietta (Orlando), Lockheed, Westinghouse, Hughes Aircraft In Europe: CEA (French Atomic Energy Commission), MATRA, Aérospatiale, BAC (British Aircraft Corporation). Analog–digital hybrids Analog computing devices are fast; digital computing devices are more versatile and accurate. The idea behind an analog-digital hybrid is to combine the two processes for the best efficiency. An example of such hybrid elementary device is the hybrid multiplier, where one input is an analog signal, the other input is a digital signal and the output is analog. It acts as an analog potentiometer, upgradable digitally. This kind of hybrid technique is mainly used for fast dedicated real time computation when computing time is very critical, as signal processing for radars and generally for controllers in embedded systems. In the early 1970s, analog computer manufacturers tried to tie together their analog computers with a digital computers to get the advantages of the two techniques. In such systems, the digital computer controlled the analog computer, providing initial set-up, initiating multiple analog runs, and automatically feeding and collecting data. The digital computer may also participate to the calculation itself using analog-to-digital and digital-to-analog converters. The largest manufacturer of hybrid computers was Electronic Associates. Their hybrid computer model 8900 was made of a digital computer and one or more analog consoles. These systems were mainly dedicated to large projects such as the Apollo program and Space Shuttle at NASA, or Ariane in Europe, especially during the integration step where at the beginning everything is simulated, and progressively real components replace their simulated parts. Only one company was known as offering general commercial computing services on its hybrid computers, CISI of France, in the 1970s. The best reference in this field is the 100,000 simulation runs for each certification of the automatic landing systems of Airbus and Concorde aircraft. After 1980, purely digital computers progressed more and more rapidly and were fast enough to compete with analog computers. One key to the speed of analog computers was their fully parallel computation, but this was also a limitation. The more equations required for a problem, the more analog components were needed, even when the problem wasn't time critical. "Programming" a problem meant interconnecting the analog operators; even with a removable wiring panel this was not very versatile. Today there are no more big hybrid computers, but only hybrid components. Implementations Mechanical analog computers While a wide variety of mechanisms have been developed throughout history, some stand out because of their theoretical importance, or because they were manufactured in significant quantities. Most practical mechanical analog computers of any significant complexity used rotating shafts to carry variables from one mechanism to another. Cables and pulleys were used in a Fourier synthesizer, a tide-predicting machine, which summed the individual harmonic components. Another category, not nearly as well known, used rotating shafts only for input and output, with precision racks and pinions. The racks were connected to linkages that performed the computation. At least one U.S. Naval sonar fire control computer of the later 1950s, made by Librascope, was of this type, as was the principal computer in the Mk. 56 Gun Fire Control System. Online, there is a remarkably clear illustrated reference (OP 1140) that describes the fire control computer mechanisms. For adding and subtracting, precision miter-gear differentials were in common use in some computers; the Ford Instrument Mark I Fire Control Computer contained about 160 of them. Integration with respect to another variable was done by a rotating disc driven by one variable. Output came from a pick-off device (such as a wheel) positioned at a radius on the disc proportional to the second variable. (A carrier with a pair of steel balls supported by small rollers worked especially well. A roller, its axis parallel to the disc's surface, provided the output. It was held against the pair of balls by a spring.) Arbitrary functions of one variable were provided by cams, with gearing to convert follower movement to shaft rotation. Functions of two variables were provided by three-dimensional cams. In one good design, one of the variables rotated the cam. A hemispherical follower moved its carrier on a pivot axis parallel to that of the cam's rotating axis. Pivoting motion was the output. The second variable moved the follower along the axis of the cam. One practical application was ballistics in gunnery. Coordinate conversion from polar to rectangular was done by a mechanical resolver (called a "component solver" in US Navy fire control computers). Two discs on a common axis positioned a sliding block with pin (stubby shaft) on it. One disc was a face cam, and a follower on the block in the face cam's groove set the radius. The other disc, closer to the pin, contained a straight slot in which the block moved. The input angle rotated the latter disc (the face cam disc, for an unchanging radius, rotated with the other (angle) disc; a differential and a few gears did this correction). Referring to the mechanism's frame, the location of the pin corresponded to the tip of the vector represented by the angle and magnitude inputs. Mounted on that pin was a square block. Rectilinear-coordinate outputs (both sine and cosine, typically) came from two slotted plates, each slot fitting on the block just mentioned. The plates moved in straight lines, the movement of one plate at right angles to that of the other. The slots were at right angles to the direction of movement. Each plate, by itself, was like a Scotch yoke, known to steam engine enthusiasts. During World War II, a similar mechanism converted rectilinear to polar coordinates, but it was not particularly successful and was eliminated in a significant redesign (USN, Mk. 1 to Mk. 1A). Multiplication was done by mechanisms based on the geometry of similar right triangles. Using the trigonometric terms for a right triangle, specifically opposite, adjacent, and hypotenuse, the adjacent side was fixed by construction. One variable changed the magnitude of the opposite side. In many cases, this variable changed sign; the hypotenuse could coincide with the adjacent side (a zero input), or move beyond the adjacent side, representing a sign change. Typically, a pinion-operated rack moving parallel to the (trig.-defined) opposite side would position a slide with a slot coincident with the hypotenuse. A pivot on the rack let the slide's angle change freely. At the other end of the slide (the angle, in trig. terms), a block on a pin fixed to the frame defined the vertex between the hypotenuse and the adjacent side. At any distance along the adjacent side, a line perpendicular to it intersects the hypotenuse at a particular point. The distance between that point and the adjacent side is some fraction that is the product of 1 the distance from the vertex, and 2 the magnitude of the opposite side. The second input variable in this type of multiplier positions a slotted plate perpendicular to the adjacent side. That slot contains a block, and that block's position in its slot is determined by another block right next to it. The latter slides along the hypotenuse, so the two blocks are positioned at a distance from the (trig.) adjacent side by an amount proportional to the product. To provide the product as an output, a third element, another slotted plate, also moves parallel to the (trig.) opposite side of the theoretical triangle. As usual, the slot is perpendicular to the direction of movement. A block in its slot, pivoted to the hypotenuse block positions it. A special type of integrator, used at a point where only moderate accuracy was needed, was based on a steel ball, instead of a disc. It had two inputs, one to rotate the ball, and the other to define the angle of the ball's rotating axis. That axis was always in a plane that contained the axes of two movement pick-off rollers, quite similar to the mechanism of a rolling-ball computer mouse (in that mechanism, the pick-off rollers were roughly the same diameter as the ball). The pick-off roller axes were at right angles. A pair of rollers "above" and "below" the pick-off plane were mounted in rotating holders that were geared together. That gearing was driven by the angle input, and established the rotating axis of the ball. The other input rotated the "bottom" roller to make the ball rotate. Essentially, the whole mechanism, called a component integrator, was a variable-speed drive with one motion input and two outputs, as well as an angle input. The angle input varied the ratio (and direction) of coupling between the "motion" input and the outputs according to the sine and cosine of the input angle. Although they did not accomplish any computation, electromechanical position servos were essential in mechanical analog computers of the "rotating-shaft" type for providing operating torque to the inputs of subsequent computing mechanisms, as well as driving output data-transmission devices such as large torque-transmitter synchros in naval computers. Other readout mechanisms, not directly part of the computation, included internal odometer-like counters with interpolating drum dials for indicating internal variables, and mechanical multi-turn limit stops. Considering that accurately controlled rotational speed in analog fire-control computers was a basic element of their accuracy, there was a motor with its average speed controlled by a balance wheel, hairspring, jeweled-bearing differential, a twin-lobe cam, and spring-loaded contacts (ship's AC power frequency was not necessarily accurate, nor dependable enough, when these computers were designed). Electronic analog computers Electronic analog computers typically have front panels with numerous jacks (single-contact sockets) that permit patch cords (flexible wires with plugs at both ends) to create the interconnections that define the problem setup. In addition, there are precision high-resolution potentiometers (variable resistors) for setting up (and, when needed, varying) scale factors. In addition, there is usually a zero-center analog pointer-type meter for modest-accuracy voltage measurement. Stable, accurate voltage sources provide known magnitudes. Typical electronic analog computers contain anywhere from a few to a hundred or more operational amplifiers ("op amps"), named because they perform mathematical operations. Op amps are a particular type of feedback amplifier with very high gain and stable input (low and stable offset). They are always used with precision feedback components that, in operation, all but cancel out the currents arriving from input components. The majority of op amps in a representative setup are summing amplifiers, which add and subtract analog voltages, providing the result at their output jacks. As well, op amps with capacitor feedback are usually included in a setup; they integrate the sum of their inputs with respect to time. Integrating with respect to another variable is the nearly exclusive province of mechanical analog integrators; it is almost never done in electronic analog computers. However, given that a problem solution does not change with time, time can serve as one of the variables. Other computing elements include analog multipliers, nonlinear function generators, and analog comparators. Electrical elements such as inductors and capacitors used in electrical analog computers had to be carefully manufactured to reduce non-ideal effects. For example, in the construction of AC power network analyzers, one motive for using higher frequencies for the calculator (instead of the actual power frequency) was that higher-quality inductors could be more easily made. Many general-purpose analog computers avoided the use of inductors entirely, re-casting the problem in a form that could be solved using only resistive and capacitive elements, since high-quality capacitors are relatively easy to make. The use of electrical properties in analog computers means that calculations are normally performed in real time (or faster), at a speed determined mostly by the frequency response of the operational amplifiers and other computing elements. In the history of electronic analog computers, there were some special high-speed types. Nonlinear functions and calculations can be constructed to a limited precision (three or four digits) by designing function generators—special circuits of various combinations of resistors and diodes to provide the nonlinearity. Typically, as the input voltage increases, progressively more diodes conduct. When compensated for temperature, the forward voltage drop of a transistor's base-emitter junction can provide a usably accurate logarithmic or exponential function. Op amps scale the output voltage so that it is usable with the rest of the computer. Any physical process that models some computation can be interpreted as an analog computer. Some examples, invented for the purpose of illustrating the concept of analog computation, include using a bundle of spaghetti as a model of sorting numbers; a board, a set of nails, and a rubber band as a model of finding the convex hull of a set of points; and strings tied together as a model of finding the shortest path in a network. These are all described in Dewdney (1984). Components Analog computers often have a complicated framework, but they have, at their core, a set of key components that perform the calculations. The operator manipulates these through the computer's framework. Key hydraulic components might include pipes, valves and containers. Key mechanical components might include rotating shafts for carrying data within the computer, miter gear differentials, disc/ball/roller integrators, cams (2-D and 3-D), mechanical resolvers and multipliers, and torque servos. Key electrical/electronic components might include: precision resistors and capacitors operational amplifiers multipliers potentiometers fixed-function generators The core mathematical operations used in an electric analog computer are: addition integration with respect to time inversion multiplication exponentiation logarithm division In some analog computer designs, multiplication is much preferred to division. Division is carried out with a multiplier in the feedback path of an Operational Amplifier. Differentiation with respect to time is not frequently used, and in practice is avoided by redefining the problem when possible. It corresponds in the frequency domain to a high-pass filter, which means that high-frequency noise is amplified; differentiation also risks instability. Limitations In general, analog computers are limited by non-ideal effects. An analog signal is composed of four basic components: DC and AC magnitudes, frequency, and phase. The real limits of range on these characteristics limit analog computers. Some of these limits include the operational amplifier offset, finite gain, and frequency response, noise floor, non-linearities, temperature coefficient, and parasitic effects within semiconductor devices. For commercially available electronic components, ranges of these aspects of input and output signals are always figures of merit. Decline In the 1950s to 1970s, digital computers based on first vacuum tubes, transistors, integrated circuits and then micro-processors became more economical and precise. This led digital computers to largely replace analog computers. Even so, some research in analog computation is still being done. A few universities still use analog computers to teach control system theory. The American company Comdyna manufactured small analog computers. At Indiana University Bloomington, Jonathan Mills has developed the Extended Analog Computer based on sampling voltages in a foam sheet. At the Harvard Robotics Laboratory, analog computation is a research topic. Lyric Semiconductor's error correction circuits use analog probabilistic signals. Slide rules are still popular among aircraft personnel. Resurgence With the development of very-large-scale integration (VLSI) technology, Yannis Tsividis' group at Columbia University has been revisiting analog/hybrid computers design in standard CMOS process. Two VLSI chips have been developed, an 80th-order analog computer (250 nm) by Glenn Cowan in 2005 and a 4th-order hybrid computer (65 nm) developed by Ning Guo in 2015, both targeting at energy-efficient ODE/PDE applications. Glenn's chip contains 16 macros, in which there are 25 analog computing blocks, namely integrators, multipliers, fanouts, few nonlinear blocks. Ning's chip contains one macro block, in which there are 26 computing blocks including integrators, multipliers, fanouts, ADCs, SRAMs and DACs. Arbitrary nonlinear function generation is made possible by the ADC+SRAM+DAC chain, where the SRAM block stores the nonlinear function data. The experiments from the related publications revealed that VLSI analog/hybrid computers demonstrated about 1–2 orders magnitude of advantage in both solution time and energy while achieving accuracy within 5%, which points to the promise of using analog/hybrid computing techniques in the area of energy-efficient approximate computing. In 2016, a team of researchers developed a compiler to solve differential equations using analog circuits. Analog computers are also used in neuromorphic computing, and in 2021 a group of researchers have shown that a specific type of artificial neural network called a spiking neural network was able to work with analog neuromorphic computers. Practical examples These are examples of analog computers that have been constructed or practically used: Boeing B-29 Superfortress Central Fire Control System Deltar E6B flight computer Kerrison Predictor Leonardo Torres y Quevedo's Analogue Calculating Machines based on "fusee sans fin" Librascope, aircraft weight and balance computer Mechanical computer Mechanical integrators, for example, the planimeter Nomogram Norden bombsight Rangekeeper, and related fire control computers Scanimate Torpedo Data Computer Torquetum Water integrator MONIAC, economic modelling Ishiguro Storm Surge Computer Analog (audio) synthesizers can also be viewed as a form of analog computer, and their technology was originally based in part on electronic analog computer technology. The ARP 2600's Ring Modulator was actually a moderate-accuracy analog multiplier. The Simulation Council (or Simulations Council) was an association of analog computer users in US. It is now known as The Society for Modeling and Simulation International. The Simulation Council newsletters from 1952 to 1963 are available online and show the concerns and technologies at the time, and the common use of analog computers for missilry. See also Analog neural network Analogical models Chaos theory Differential equation Dynamical system Field-programmable analog array General purpose analog computer Lotfernrohr 7 series of WW II German bombsights Signal (electrical engineering) Voskhod Spacecraft "Globus" IMP navigation instrument XY-writer Notes References A.K. Dewdney. "On the Spaghetti Computer and Other Analog Gadgets for Problem Solving", Scientific American, 250(6):19–26, June 1984. Reprinted in The Armchair Universe, by A.K. Dewdney, published by W.H. Freeman & Company (1988), . Universiteit van Amsterdam Computer Museum. (2007). Analog Computers. Jackson, Albert S., "Analog Computation". London & New York: McGraw-Hill, 1960. External links Biruni's eight-geared lunisolar calendar in "Archaeology: High tech from Ancient Greece", François Charette, Nature 444, 551–552(30 November 2006), The first computers Large collection of electronic analog computers with lots of pictures, documentation and samples of implementations (some in German) Large collection of old analog and digital computers at Old Computer Museum A great disappearing act: the electronic analogue computer Chris Bissell, The Open University, Milton Keynes, UK Accessed February 2007 German computer museum with still runnable analog computers Analog computer basics Analog computer trumps Turing model Harvard Robotics Laboratory Analog Computation The Enns Power Network Computer – an analog computer for the analysis of electric power systems (advertisement from 1955) Librascope Development Company – Type LC-1 WWII Navy PV-1 "Balance Computor" History of computing hardware Greek inventions
2429
https://en.wikipedia.org/wiki/Audio
Audio
Audio most commonly refers to sound, as it is transmitted in signal form. It may also refer to: Sound Audio signal, an electrical representation of sound Audio frequency, a frequency in the audio spectrum Digital audio, representation of sound in a form processed and/or stored by computers or digital electronics Audio, audible content (media) in audio production and publishing Semantic audio, extraction of symbols or meaning from audio Stereophonic audio, method of sound reproduction that creates an illusion of multi-directional audible perspective Audio equipment Entertainment AUDIO (group), an American R&B band of 5 brothers formerly known as TNT Boyz and as B5 Audio (album), an album by the Blue Man Group Audio (magazine), a magazine published from 1947 to 2000 Audio (musician), British drum and bass artist "Audio" (song), a song by LSD Computing , an HTML element, see HTML5 audio See also Acoustic (disambiguation) Audible (disambiguation) Audiobook Radio broadcasting Sound recording and reproduction Sound reinforcement
2444
https://en.wikipedia.org/wiki/Conservation%20and%20restoration%20of%20cultural%20property
Conservation and restoration of cultural property
The conservation and restoration of cultural property focuses on protection and care of cultural property (tangible cultural heritage), including artworks, architecture, archaeology, and museum collections. Conservation activities include preventive conservation, examination, documentation, research, treatment, and education. This field is closely allied with conservation science, curators and registrars. Definition Conservation of cultural property involves protection and restoration using "any methods that prove effective in keeping that property in as close to its original condition as possible for as long as possible." Conservation of cultural heritage is often associated with art collections and museums and involves collection care and management through tracking, examination, documentation, exhibition, storage, preventive conservation, and restoration. The scope has widened from art conservation, involving protection and care of artwork and architecture, to conservation of cultural heritage, also including protection and care of a broad set of other cultural and historical works. Conservation of cultural heritage can be described as a type of ethical stewardship. It may broadly be divided into: Conservation and restoration of movable cultural property Conservation and restoration of immovable cultural property Conservation of cultural property applies simple ethical guidelines: Minimal intervention; Appropriate materials and reversible methods; Full documentation of all work undertaken. Often there are compromises between preserving appearance, maintaining original design and material properties, and ability to reverse changes. Reversibility is now emphasized so as to reduce problems with future treatment, investigation, and use. In order for conservators to decide upon an appropriate conservation strategy and apply their professional expertise accordingly, they must take into account views of the stakeholder, the values, artist's intent, meaning of the work, and the physical needs of the material. Cesare Brandi in his Theory of Restoration, describes restoration as "the methodological moment in which the work of art is appreciated in its material form and in its historical and aesthetic duality, with a view to transmitting it to the future". History and science Key dates Some consider the tradition of conservation of cultural heritage in Europe to have begun in 1565 with the restoration of the Sistine Chapel frescoes, but more ancient examples include the work of Cassiodorus. Brief history The care of cultural heritage has a long history, one that was primarily aimed at fixing and mending objects for their continued use and aesthetic enjoyment. Until the early 20th century, artists were normally the ones called upon to repair damaged artworks. During the 19th century, however, the fields of science and art became increasingly intertwined as scientists such as Michael Faraday began to study the damaging effects of the environment to works of art. Louis Pasteur carried out scientific analysis on paint as well. However, perhaps the first organized attempt to apply a theoretical framework to the conservation of cultural heritage came with the founding in the United Kingdom of the Society for the Protection of Ancient Buildings in 1877. The society was founded by William Morris and Philip Webb, both of whom were deeply influenced by the writings of John Ruskin. During the same period, a French movement with similar aims was being developed under the direction of Eugène Viollet-le-Duc, an architect and theorist, famous for his restorations of medieval buildings. Conservation of cultural heritage as a distinct field of study initially developed in Germany, where in 1888 Friedrich Rathgen became the first chemist to be employed by a Museum, the Koniglichen Museen, Berlin (Royal Museums of Berlin). He not only developed a scientific approach to the care of objects in the collections, but disseminated this approach by publishing a Handbook of Conservation in 1898. The early development of conservation of cultural heritage in any area of the world is usually linked to the creation of positions for chemists within museums. In British archaeology, key research and technical experimentation in conservation was undertaken by women such as Ione Gedye both in the field and in archaeological collections, particularly those of the Institute of Archaeology, London. In the United Kingdom, pioneering research into painting materials and conservation, ceramics, and stone conservation was conducted by Arthur Pillans Laurie, academic chemist and Principal of Heriot-Watt University from 1900. Laurie's interests were fostered by William Holman Hunt. In 1924 the chemist Harold Plenderleith began to work at the British Museum with Alexander Scott in the recently created Research Laboratory, although he was actually employed by the Department of Scientific and Industrial Research in the early years. Plenderleith's appointment may be said to have given birth to the conservation profession in the UK, although there had been craftsmen in many museums and in the commercial art world for generations. This department was created by the museum to address the deteriorating condition of objects in the collection, damages which were a result of their being stored in the London Underground tunnels during the First World War. The creation of this department moved the focus for the development of conservation theory and practice from Germany to Britain, and made the latter a prime force in this fledgling field. In 1956 Plenderleith wrote a significant handbook called The Conservation of Antiquities and Works of Art, which supplanted Rathgen's earlier tome and set new standards for the development of art and conservation science. In the United States, the development of conservation of cultural heritage can be traced to the Fogg Art Museum, and Edward Waldo Forbes, its director from 1909 to 1944. He encouraged technical investigation, and was Chairman of the Advisory Committee for the first technical journal, Technical Studies in the Field of the Fine Arts, published by the Fogg from 1932 to 1942. Importantly he also brought onto the museum staff chemists. Rutherford John Gettens was the first of such in the US to be permanently employed by an art museum. He worked with George L. Stout, the founder and first editor of Technical Studies. Gettens and Stout co-authored Painting Materials: A Short Encyclopaedia in 1942, reprinted in 1966. This compendium is still cited regularly. Only a few dates and descriptions in Gettens' and Stout's book are now outdated. George T. Oliver, of Oliver Brothers Art Restoration and Art Conservation-Boston (Est. 1850 in New York City) invented the vacuum hot table for relining paintings in 1920s; he filed a patent for the table in 1937. Taylor's prototype table, which he designed and constructed, is still in operation. Oliver Brothers is believed to be the first and the oldest continuously operating art restoration company in the United States. The focus of conservation development then accelerated in Britain and America, and it was in Britain that the first International Conservation Organisations developed. The International Institute for Conservation of Historic and Artistic Works (IIC) was incorporated under British law in 1950 as "a permanent organization to co-ordinate and improve the knowledge, methods, and working standards needed to protect and preserve precious materials of all kinds." The rapid growth of conservation professional organizations, publications, journals, newsletters, both internationally and in localities, has spearheaded the development of the conservation profession, both practically and theoretically. Art historians and theorists such as Cesare Brandi have also played a significant role in developing conservation science theory. In recent years ethical concerns have been at the forefront of developments in conservation. Most significantly has been the idea of preventive conservation. This concept is based in part on the pioneering work by Garry Thomson CBE, and his book Museum Environment, first published in 1978. Thomson was associated with the National Gallery in London; it was here that he established a set of guidelines or environmental controls for the best conditions in which objects could be stored and displayed within the museum environment. Although his exact guidelines are no longer rigidly followed, they did inspire this field of conservation. Conservation laboratories Conservators routinely use chemical and scientific analysis for the examination and treatment of cultural works. The modern conservation laboratory uses equipment such as microscopes, spectrometers, and various x-ray regime instruments to better understand objects and their components. The data thus collected helps in deciding the conservation treatments to be provided to the object. Ethics The conservator's work is guided by ethical standards. These take the form of applied ethics. Ethical standards have been established across the world, and national and international ethical guidelines have been written. One such example is: American Institute for Conservation Code of Ethics and Guidelines for Practice Conservation OnLine provides resources on ethical issues in conservation, including examples of codes of ethics and guidelines for professional conduct in conservation and allied fields; and charters and treaties pertaining to ethical issues involving the preservation of cultural property. As well as standards of practice conservators deal with wider ethical concerns, such as the debates as to whether all art is worth preserving. Keeping up with the international contemporary scenario, recent concerns with sustainability in conservation have emerged. The common understanding that "the care of an artifact should not come at the undue expense of the environment" is generally well accepted within the community and is already contemplated in guidelines of diverse institutions related to the field. Practice Preventive conservation Many cultural works are sensitive to environmental conditions such as temperature, humidity and exposure to visible light and ultraviolet radiation. These works must be protected in controlled environments where such variables are maintained within a range of damage-limiting levels. For example, watercolour paintings usually require shielding from sunlight to prevent fading of pigments. Collections care is an important element of museum policy. It is an essential responsibility of members of the museum profession to create and maintain a protective environment for the collections in their care, whether in store, on display, or in transit. A museum should carefully monitor the condition of collections to determine when an artifact requires conservation work and the services of a qualified conservator. Interventive conservation and restoration A teaching programme of interventive conservation was established in the UK at the Institute of Archaeology by Ione Gedye, which is still teaching interventive conservators today. A principal aim of a cultural conservator is to reduce the rate of deterioration of an object. Both non-interventive and interventive methodologies may be employed in pursuit of this goal. Interventive conservation refers to any direct interaction between the conservator and the material fabric of the object. Interventive actions are carried out for a variety of reasons, including aesthetic choices, stabilization needs for structural integrity, or cultural requirements for intangible continuity. Examples of interventive treatments include the removal of discolored varnish from a painting, the application of wax to a sculpture, and the washing and rebinding of a book. Ethical standards within the field require that the conservator fully justify interventive actions and carry out documentation before, during, and after the treatment. One of the guiding principles of conservation of cultural heritage has traditionally been the idea of reversibility, that all interventions with the object should be fully reversible and that the object should be able to be returned to the state in which it was prior to the conservator's intervention. Although this concept remains a guiding principle of the profession, it has been widely critiqued within the conservation profession and is now considered by many to be "a fuzzy concept." Another important principle of conservation is that all alterations should be well documented and should be clearly distinguishable from the original object. An example of a highly publicized interventive conservation effort would be the conservation work conducted on the Sistine Chapel. Example of an archaeological discovery and restoration of a mural painting Example of the restoration of an oil painting Sustainable conservation Recognising that conservation practices should not harm the environment, harm people, or contribute to global warming, the conservation-restoration profession has more recently focused on practices that reduce waste, reduce energy costs, and minimise the use of toxic or harmful solvents. A number of research projects, working groups, and other initiatives have explored how conservation can become a more environmentally sustainable profession. Sustainable conservation practices apply both to work within cultural institutions (e.g. museums, art galleries, archives, libraries, research centres and historic sites) as well as to businesses and private studios. Choice of materials Conservators and restorers use a wide variety of materials - in conservation treatments, and those used to safely transport, display and store cultural heritage items. These materials can include solvents, papers and boards, fabrics, adhesives and consolidants, plastics and foams, wood products, and many others. Stability and longevity are two important factors conservators consider when selecting materials; sustainability is becoming an increasingly important third. Examples of sustainable material choices and practices include: Using biodegradable products or those with less environmental impact where possible; Using 'green solvents' instead of more toxic alternatives, or treatment strategies that use much smaller amounts of solvents - for example, semi-rigid aqueous gels, emulsions or nano materials; Preparing smaller amounts of material (e.g. adhesives) to avoid waste; Observing recommended disposal protocols for chemicals, recyclable materials and compostable materials, particularly to avoid contamination of waterways; Choosing protective work wear that can be washed or cleaned and reused, rather than disposable options; Tracking stock quantities to avoid over-buying, especially for materials with expiration dates; Using durable materials for packing that may be washed and re-used, such as Tyvek or Mylar; Repurposing consumables such as blotting paper, non-woven fabrics, and polyester film when they are no longer fit for their original purpose; Using locally produced products whenever possible, to reduce carbon footprints; Reusing packaging materials such as cardboard boxes, plastic wrap and wooden crates; Using standard sizes of packaging and package designs that reduce waste; These decisions are not always straightforward - for example, installing deionised or distilled water filters in laboratories reduces waste associated with purchasing bottled products, but increases energy consumption. Similarly, locally-made papers and boards may reduce inherent carbon miles but they may be made with pulp sourced from old growth forests. Another dilemma is that many conservation-grade materials are chosen because they do not biodegrade. For example, when selecting a plastic with which to make storage enclosures, conservators prefer to use relatively long-lived plastics because they have better ageing properties - they are less likely to become yellow, leach plasticisers, or lose structural integrity and crumble (examples include polyethylene, polypropylene, and polyester). These plastics will also take longer to degrade in landfill. Energy use Many conservators and cultural organisations have sought to reduce the energy costs associated with controlling indoor storage and display environments (temperature, relative humidity, air filtration, and lighting levels) as well as those associated with the transport of cultural heritage items for exhibitions and loans. In general, lowering the temperature reduces the rate at which damaging chemical reactions occur within materials. For example, storing cellulose acetate film at 10 °C instead of 21 °C is estimated to increase its usable life by over 100 years. Controlling the relative humidity of air helps to reduce hydrolysis reactions and minimises cracking, distortion and other physical changes in hygroscopic materials. Changes in temperature will also bring about changes in relative humidity. Therefore, the conservation profession has placed great importance on controlling indoor environments. Temperature and humidity can be controlled through passive means (e.g. insulation, building design) or active means (air conditioning). Active controls typically require much higher energy use. Energy use increases with specificity - e.g. in will require more energy to maintain a quantity of air to a narrow temperature range (20-22 °C) than to a broad range (18-25 °C). In the past, conservation recommendations have often called for very tight, inflexible temperature and relative humidity set points. In other cases, conservators have recommended strict environmental conditions for buildings that could not reasonably be expected to achieve them, due to the quality of build, local environmental conditions (e.g. recommending temperate conditions for a building located in the tropics) or the financial circumstances of the organisation. This has been an area of particular debate for cultural heritage organisations who lend and borrow cultural items to each other - often, the lender will specify strict environmental conditions as part of the loan agreement, which may be very expensive for the borrowing organisation to achieve, or impossible. The energy costs associated with cold storage and digital storage are also gaining more attention. Cold storage is a very effective strategy to preserve at-risk collections such as cellulose nitrate and cellulose acetate film, which can deteriorate beyond use within decades at ambient conditions. Digital storage costs are rising for both born-digital cultural heritage (photographs, audiovisual, time-based media) and to store digital preservation and access copies of cultural heritage. Digital storage capacity is a major factor in the complexity of preserving digital heritage such as video games, social media, messaging services, and email. Other areas where energy use can be reduced within conservation and restoration include: Exhibition lighting - e.g. using lower-energy LED lighting systems and light sensors that switch lights on only when visitors are present; Installation of green energy capture systems in cultural organisations, such as solar photovoltaic plates, wind energy systems, and heat pumps; Improving the energy performance of cultural buildings by installing insulation, sealing gaps, reducing the number of windows and installing double-glazing: Using microclimates to house small groups of climate-sensitive objects instead of seeking to control the environmental conditions of the whole building. Country by country look United States Heritage Preservation, in partnership with the Institute of Museum and Library Services, a U.S. federal agency, produced The Heritage Health Index. The results of this work was the report A Public Trust at Risk: The Heritage Health Index Report on the State of America's Collections, which was published in December 2005 and concluded that immediate action is needed to prevent the loss of 190 million artifacts that are in need of conservation treatment. The report made four recommendations: Institutions must give priority to providing safe conditions for the collections they hold in trust. Every collecting institution must develop an emergency plan to protect its collections and train staff to carry it out. Every institution must assign responsibility for caring for collections to members of its staff. Individuals at all levels of government and in the private sector must assume responsibility for providing the support that will allow these collections to survive. United Kingdom In October 2006, the Department for Culture, Media and Sport, a governmental department, authored a document: "Understanding the Future: Priorities for England's Museums". This document was based on several years of consultation aimed to lay out the government's priorities for museums in the 21st century. The document listed the following as priorities for the next decade: Museums will fulfil their potential as learning resources (pp 7–10). Museums will be embedded into the delivery of education in every school in the country. Understanding of the effectiveness of museum education will be improved further and best practice built into education programmes. The value of museums' collections as a research resource will be well understood and better links built between the academic community and museums. Museums will embrace their role in fostering, exploring, celebrating and questioning the identities of diverse communities (pp 11–14). The sector needs to work with partners in academia and beyond to create an intellectual framework supporting museums' capacity to tackle issues of identity. The museum sector must continue to develop improved practical techniques for engaging communities of all sorts. Museums' collections will be more dynamic and better used (pp 15–18). Government and the sector will find new ways to encourage museums to collect actively and strategically, especially the record of contemporary society. The sector will develop new collaborative approaches to sharing and developing collections and related expertise. Museums' workforce will be dynamic, highly skilled and representative (pp 17–22). Museums' governing bodies and workforce will be representative of the communities they serve. Find more varied ways for a broader range of skills to come into museums. Improve continuing professional development. Museums will work more closely with each other and partners outside the sector (pp 23–26). A consistent evidence base of the contribution of all kinds of museums to the full range of public service agendas will be developed. There will be deeper and longer lasting partnerships between the national museums and a broader range of regional partners. Museums' international roles will be strengthened to improve museum programmes in this country and Britain's image, reputation and relationships abroad. The conservation profession response to this report was on the whole less than favourable, the Institute of Conservation (ICON) published their response under the title "A Failure of Vision". It had the following to say: Concluding: Further to this the ICON website summary report lists the following specific recommendations: A national survey to find out what the public want from museums, what motivates them to visit them and what makes for a rewarding visit. A review of survey results and prioritisation of the various intrinsic, instrumental and institutional values to provide a clear basis for a 10-year strategy HR consultants to be brought in from the commercial sector to review recruitment, career development and working practices in the national and regional museums. A commitment to examine the potential for using Museum Accreditation as a more effective driver for improving recruitment, diversity, and career development across the sector. DCMS to take full account of the eventual findings of the current Commons Select Committee enquiry into Care of Collections in the final version of this document The adoption of those recommendations of the recent House of Lords inquiry into Science and Heritage which might affect the future of museums. In November 2008, the UK-based think tank Demos published an influential pamphlet entitled It's a material world: caring for the public realm, in which they argue for integrating the public directly into efforts to conserve material culture, particularly that which is in the public, their argument, as stated on page 16, demonstrates their belief that society can benefit from conservation as a paradigm as well as a profession: Training Training in conservation of cultural heritage for many years took the form of an apprenticeship, whereby an apprentice slowly developed the necessary skills to undertake their job. For some specializations within conservation this is still the case. However, it is more common in the field of conservation today that the training required to become a practicing conservator comes from a recognized university course in conservation of cultural heritage. The university can rarely provide all the necessary training in first hand experience that an apprenticeship can, and therefore in addition to graduate level training the profession also tends towards encouraging conservation students to spend time as an intern. Conservation of cultural heritage is an interdisciplinary field as conservators have backgrounds in the fine arts, sciences (including chemistry, biology, and materials science), and closely related disciplines, such as art history, archaeology, and anthropology. They also have design, fabrication, artistic, and other special skills necessary for the practical application of that knowledge. Within the various schools that teach conservation of cultural heritage, the approach differs according to the educational and vocational system within the country, and the focus of the school itself. This is acknowledged by the American Institute for Conservation who advise "Specific admission requirements differ and potential candidates are encouraged to contact the programs directly for details on prerequisites, application procedures, and program curriculum". In France, training for heritage conservation is taught by four schools : , L'École supérieure des Beaux-Arts Tours, Angers, Le Mans, L'Université Paris 1 Panthéon-Sorbonne, Institut national du patrimoine. Associations and professional organizations Societies devoted to the care of cultural heritage have been in existence around the world for many years. One early example is the founding in 1877 of the Society for the Protection of Ancient Buildings in Britain to protect the built heritage, this society continues to be active today. The 14th Dalai Lama and the Tibetan people work to preserve their cultural heritage with organizations including the Tibetan Institute of Performing Arts and an international network of eight Tibet Houses. The built heritage was at the forefront of the growth of member based organizations in the United States. Preservation Virginia, founded in Richmond in 1889 as the Association for the Preservation of Virginia Antiquities, was the United States' first statewide historic preservation group. Today, professional conservators join and take part in the activities of numerous conservation associations and professional organizations with the wider field, and within their area of specialization. In Europe, E.C.C.O. European Confederation of Conservator-Restorers Organisations was established in 1991 by 14 European Conservator-Restorers' Organisations. Currently representing close to 6.000 professionals within 23 countries and 26 members organisations, including one international body (IADA), E.C.C.O. embodies the field of preservation of cultural heritage, both movable and immovable. These organizations exist to "support the conservation professionals who preserve our cultural heritage". This involves upholding professional standards, promoting research and publications, providing educational opportunities, and fostering the exchange of knowledge among cultural conservators, allied professionals, and the public. International cultural property documents See also Conservation and restoration of rail vehicles The Georgian Group Wikipedia:WikiProject Collections Care International Day For Monuments and Sites References Further reading Copies of this volume are available for free pdf download from the Smithsonian's digital library by clicking on the included link. External links BCIN, the Bibliographic Database of the Conservation Information Network CAMEO: Conservation and Art Materials Encyclopedia OnLine Conservation OnLine (CoOL) Resources for Conservation Professionals DOCAM — Documentation and Conservation of the Media Arts Heritage ICOMOS Open Archive: EPrints on Cultural Heritage Publications & Resources at the Getty Conservation Institute Art history Museology Cultural heritage Articles containing video clips Cultural heritage conservation
2493
https://en.wikipedia.org/wiki/Anthroposophy
Anthroposophy
Anthroposophy is a spiritual movement which was founded in the early 20th century by the esotericist Rudolf Steiner that postulates the existence of an objective, intellectually comprehensible spiritual world, accessible to human experience. Followers of anthroposophy aim to engage in spiritual discovery through a mode of thought independent of sensory experience. While much of anthroposophy is pseudoscientific, proponents claim to present their ideas in a manner that is verifiable by rational discourse and say that they seek precision and clarity comparable to that obtained by scientists investigating the physical world. Anthroposophy has its roots in German idealism, mystical philosophies, and pseudoscience including racist pseudoscience. Critics and proponents alike acknowledge his many anti-racist statements, often far ahead of his contemporaries and predecessors still commonly cited today. Steiner chose the term anthroposophy (from Greek , 'human', and sophia, 'wisdom') to emphasize his philosophy's humanistic orientation. He defined it as "a scientific exploration of the spiritual world", Others have variously called it a "philosophy and cultural movement", a "spiritual movement", a "spiritual science", or "a system of thought". Anthroposophical ideas have been employed in alternative movements in many areas including education (both in Waldorf schools and in the Camphill movement), agriculture, medicine, banking, organizational development, and the arts. The main organization for advocacy of Steiner's ideas, the Anthroposophical Society, is headquartered at the Goetheanum in Dornach, Switzerland. Anthroposophy's supporters include writers Saul Bellow, and Selma Lagerlöf, painters Piet Mondrian, Wassily Kandinsky and Hilma af Klint, filmmaker Andrei Tarkovsky, child psychiatrist Eva Frommer, music therapist Maria Schüppel, Romuva religious founder Vydūnas, and former president of Georgia Zviad Gamsakhurdia. Though several prominent members of the Nazi Party were supporters of anthroposophy and its movements, including (an agriculturalist), SS colonel Hermann Schneider, and Gestapo chief Heinrich Müller, anti-Nazis such as Traute Lafrenz, a member of the White Rose resistance movement, were also followers. Rudolf Hess, the adjunct Führer, was a patron of Waldorf schools and a staunch defender of biodynamic agriculture. The historian of religion Olav Hammer has termed anthroposophy "the most important esoteric society in European history". Many scientists, physicians, and philosophers, including Michael Shermer, Michael Ruse, Edzard Ernst, David Gorski, and Simon Singh have criticized anthroposophy's application in the areas of medicine, biology, agriculture, and education to be dangerous and pseudoscientific. Some of Steiner's ideas that are unsupported or disproven by modern science, including: racial evolution, clairvoyance (Steiner claimed he was clairvoyant), and the Atlantis myth. History The early work of the founder of anthroposophy, Rudolf Steiner, culminated in his Philosophy of Freedom (also translated as The Philosophy of Spiritual Activity and Intuitive Thinking as a Spiritual Path). Here, Steiner developed a concept of free will based on inner experiences, especially those that occur in the creative activity of independent thought. By the beginning of the twentieth century, Steiner's interests turned almost exclusively to spirituality. His work began to draw the attention of others interested in spiritual ideas; among these was the Theosophical Society. From 1900 on, thanks to the positive reception his ideas received from Theosophists, Steiner focused increasingly on his work with the Theosophical Society, becoming the secretary of its section in Germany in 1902. During his leadership, membership increased dramatically, from just a few individuals to sixty-nine lodges. By 1907, a split between Steiner and the Theosophical Society became apparent. While the Society was oriented toward an Eastern and especially Indian approach, Steiner was trying to develop a path that embraced Christianity and natural science. The split became irrevocable when Annie Besant, then president of the Theosophical Society, presented the child Jiddu Krishnamurti as the reincarnated Christ. Steiner strongly objected and considered any comparison between Krishnamurti and Christ to be nonsense; many years later, Krishnamurti also repudiated the assertion. Steiner's continuing differences with Besant led him to separate from the Theosophical Society Adyar. He was subsequently followed by the great majority of the Theosophical Society's German members, as well as many members of other national sections. By this time, Steiner had reached considerable stature as a spiritual teacher and expert in the occult. He spoke about what he considered to be his direct experience of the Akashic Records (sometimes called the "Akasha Chronicle"), thought to be a spiritual chronicle of the history, pre-history, and future of the world and mankind. In a number of works, Steiner described a path of inner development he felt would let anyone attain comparable spiritual experiences. In Steiner's view, sound vision could be developed, in part, by practicing rigorous forms of ethical and cognitive self-discipline, concentration, and meditation. In particular, Steiner believed a person's spiritual development could occur only after a period of moral development. In 1912, Steiner broke away from the Theosophical Society to found an independent group, which he named the Anthroposophical Society. After World War I, members of the young society began applying Steiner's ideas to create cultural movements in areas such as traditional and special education, farming, and medicine. By 1923, a schism had formed between older members, focused on inner development, and younger members eager to become active in contemporary social transformations. In response, Steiner attempted to bridge the gap by establishing an overall School for Spiritual Science. As a spiritual basis for the reborn movement, Steiner wrote a Foundation Stone Meditation which remains a central touchstone of anthroposophical ideas. Steiner died just over a year later, in 1925. The Second World War temporarily hindered the anthroposophical movement in most of Continental Europe, as the Anthroposophical Society and most of its practical counter-cultural applications were banned by the Nazi government. Though at least one prominent member of the Nazi Party, Rudolf Hess, was a strong supporter of anthroposophy, very few anthroposophists belonged to the National Socialist Party. In reality, Steiner had both enemies and loyal supporters in the upper echelons of the Nazi regime. Staudenmaier speaks of the "polycratic party-state apparatus", so Nazism's approach to Anthroposophy was not characterized by monolithic ideological unity. When Hess flew to the UK and was imprisoned, their most powerful protector was gone, but Anthroposophists were still not left without supporters among higher-placed Nazis. The Third Reich had banned almost all esoteric organizations, pretending that these are controlled by Jews. The truth was that while Anthroposophists complained of bad press, they were to a surprising extent let be by the Nazi regime, "including outspokenly supportive pieces in the Völkischer Beobachter". Ideological purists from Sicherheitsdienst argued largely in vain against Anthroposophy. According to Staudenmaier, "The prospect of unmitigated persecution was held at bay for years in a tenuous truce between pro-anthroposophical and anti-anthroposophical Nazi factions." According to Hans Büchenbacher, an anthroposophist, the Secretary General of the General Anthroposophical Society, Guenther Wachsmuth, as well as Steiner's widow, Marie Steiner, were “completely pro-Nazi.” Marie Steiner-von Sivers, Guenther Wachsmuth, and Albert Steffen, had publicly expressed sympathy for the Nazi regime since its beginnings; led by such sympathies of their leadership, the Swiss and German Anthroposophical organizations chose for a path conflating accommodation with collaboration, which in the end ensured that while the Nazi regime hunted the esoteric organizations, Gentile Anthroposophists from Nazi Germany and countries occupied by it were let be to a surprising extent. Of course they had some setbacks from the enemies of Anthroposophy among the upper echelons of the Nazi regime, but Anthroposophists also had loyal supporters among them, so overall Gentile Anthroposophists were not badly hit by the Nazi regime. By 2007, national branches of the Anthroposophical Society had been established in fifty countries and about 10,000 institutions around the world were working on the basis of anthroposophical ideas. Etymology and earlier uses of the word Anthroposophy is an amalgam of the Greek terms ( 'human') and ( 'wisdom'). An early English usage is recorded by Nathan Bailey (1742) as meaning "the knowledge of the nature of man." The first known use of the term anthroposophy occurs within Arbatel de magia veterum, summum sapientiae studium, a book published anonymously in 1575 and attributed to Heinrich Cornelius Agrippa. The work describes anthroposophy (as well as theosophy) variously as an understanding of goodness, nature, or human affairs. In 1648, the Welsh philosopher Thomas Vaughan published his Anthroposophia Theomagica, or a discourse of the nature of man and his state after death. The term began to appear with some frequency in philosophical works of the mid- and late-nineteenth century. In the early part of that century, Ignaz Troxler used the term anthroposophy to refer to philosophy deepened to self-knowledge, which he suggested allows deeper knowledge of nature as well. He spoke of human nature as a mystical unity of God and world. Immanuel Hermann Fichte used the term anthroposophy to refer to "rigorous human self-knowledge," achievable through thorough comprehension of the human spirit and of the working of God in this spirit, in his 1856 work Anthropology: The Study of the Human Soul. In 1872, the philosopher of religion Gideon Spicker used the term anthroposophy to refer to self-knowledge that would unite God and world: "the true study of the human being is the human being, and philosophy's highest aim is self-knowledge, or Anthroposophy." In 1882, the philosopher Robert Zimmermann published the treatise, "An Outline of Anthroposophy: Proposal for a System of Idealism on a Realistic Basis," proposing that idealistic philosophy should employ logical thinking to extend empirical experience. Steiner attended lectures by Zimmermann at the University of Vienna in the early 1880s, thus at the time of this book's publication. In the early 1900s, Steiner began using the term anthroposophy (i.e. human wisdom) as an alternative to the term theosophy (i.e. divine wisdom). Central ideas Spiritual knowledge and freedom Anthroposophical proponents aim to extend the clarity of the scientific method to phenomena of human soul-life and spiritual experiences. Steiner believed this required developing new faculties of objective spiritual perception, which he maintained was still possible for contemporary humans. The steps of this process of inner development he identified as consciously achieved imagination, inspiration, and intuition. Steiner believed results of this form of spiritual research should be expressed in a way that can be understood and evaluated on the same basis as the results of natural science. Steiner hoped to form a spiritual movement that would free the individual from any external authority. For Steiner, the human capacity for rational thought would allow individuals to comprehend spiritual research on their own and bypass the danger of dependency on an authority such as himself. Steiner contrasted the anthroposophical approach with both conventional mysticism, which he considered lacking the clarity necessary for exact knowledge, and natural science, which he considered arbitrarily limited to what can be seen, heard, or felt with the outward senses. Nature of the human being In Theosophy, Steiner suggested that human beings unite a physical body of substances gathered from and returning to the inorganic world; a life body (also called the etheric body), in common with all living creatures (including plants); a bearer of sentience or consciousness (also called the astral body), in common with all animals; and the ego, which anchors the faculty of self-awareness unique to human beings. Anthroposophy describes a broad evolution of human consciousness. Early stages of human evolution possess an intuitive perception of reality, including a clairvoyant perception of spiritual realities. Humanity has progressively evolved an increasing reliance on intellectual faculties and a corresponding loss of intuitive or clairvoyant experiences, which have become atavistic. The increasing intellectualization of consciousness, initially a progressive direction of evolution, has led to an excessive reliance on abstraction and a loss of contact with both natural and spiritual realities. However, to go further requires new capacities that combine the clarity of intellectual thought with the imagination and with consciously achieved inspiration and intuitive insights. Anthroposophy speaks of the reincarnation of the human spirit: that the human being passes between stages of existence, incarnating into an earthly body, living on earth, leaving the body behind, and entering into the spiritual worlds before returning to be born again into a new life on earth. After the death of the physical body, the human spirit recapitulates the past life, perceiving its events as they were experienced by the objects of its actions. A complex transformation takes place between the review of the past life and the preparation for the next life. The individual's karmic condition eventually leads to a choice of parents, physical body, disposition, and capacities that provide the challenges and opportunities that further development requires, which includes karmically chosen tasks for the future life. Steiner described some conditions that determine the interdependence of a person's lives, or karma. Evolution The anthroposophical view of evolution considers all animals to have evolved from an early, unspecialized form. As the least specialized animal, human beings have maintained the closest connection to the archetypal form; contrary to the Darwinian conception of human evolution, all other animals devolve from this archetype. The spiritual archetype originally created by spiritual beings was devoid of physical substance; only later did this descend into material existence on Earth. In this view, human evolution has accompanied the Earth's evolution throughout the existence of the Earth. Anthroposophy adapted Theosophy's complex system of cycles of world development and human evolution. The evolution of the world is said to have occurred in cycles. The first phase of the world consisted only of heat. In the second phase, a more active condition, light, and a more condensed, gaseous state separate out from the heat. In the third phase, a fluid state arose, as well as a sounding, forming energy. In the fourth (current) phase, solid physical matter first exists. This process is said to have been accompanied by an evolution of consciousness which led up to present human culture. Ethics The anthroposophical view is that good is found in the balance between two polar influences on world and human evolution. These are often described through their mythological embodiments as spiritual adversaries which endeavour to tempt and corrupt humanity, Lucifer and his counterpart Ahriman. These have both positive and negative aspects. Lucifer is the light spirit, which "plays on human pride and offers the delusion of divinity", but also motivates creativity and spirituality; Ahriman is the dark spirit that tempts human beings to "...deny [their] link with divinity and to live entirely on the material plane", but that also stimulates intellectuality and technology. Both figures exert a negative effect on humanity when their influence becomes misplaced or one-sided, yet their influences are necessary for human freedom to unfold. Each human being has the task to find a balance between these opposing influences, and each is helped in this task by the mediation of the Representative of Humanity, also known as the Christ being, a spiritual entity who stands between and harmonizes the two extremes. Claimed applications Steiner/Waldorf education This is a pedagogical movement with over 1000 Steiner or Waldorf schools (the latter name stems from the first such school, founded in Stuttgart in 1919) located in some 60 countries; the great majority of these are independent (private) schools. Sixteen of the schools have been affiliated with the United Nations' UNESCO Associated Schools Project Network, which sponsors education projects that foster improved quality of education throughout the world. Waldorf schools receive full or partial governmental funding in some European nations, Australia and in parts of the United States (as Waldorf method public or charter schools) and Canada. The schools have been founded in a variety of communities: for example in the favelas of São Paulo to wealthy suburbs of major cities; in India, Egypt, Australia, the Netherlands, Mexico and South Africa. Though most of the early Waldorf schools were teacher-founded, the schools today are usually initiated and later supported by a parent community. Waldorf schools are among the most visible anthroposophical institutions. Biodynamic agriculture Biodynamic agriculture, is a form of alternative agriculture based on pseudo-scientific and esoteric concepts. It is also the first intentional form of organic farming, began in 1924, when Rudolf Steiner gave a series of lectures published in English as The Agriculture Course. Steiner is considered one of the founders of the modern organic farming movement. Anthroposophical medicine Anthroposophical medicine is a form of alternative medicine based on pseudoscientific and occult notions rather than in science-based medicine. Most anthroposophic medical preparations are highly diluted, like homeopathic remedies, while harmless in of themselves, using them in place of conventional medicine to treat illness is ineffective and risks adverse consequences. One of the most studied applications has been the use of mistletoe extracts in cancer therapy, but research has found no evidence of benefit. Special needs education and services In 1922, Ita Wegman founded an anthroposophical center for special needs education, the Sonnenhof, in Switzerland. In 1940, Karl König founded the Camphill Movement in Scotland. The latter in particular has spread widely, and there are now over a hundred Camphill communities and other anthroposophical homes for children and adults in need of special care in about 22 countries around the world. Both Karl König, Thomas Weihs and others have written extensively on these ideas underlying Special education. Architecture Steiner designed around thirteen buildings in an organic—expressionist architectural style. Foremost among these are his designs for the two Goetheanum buildings in Dornach, Switzerland. Thousands of further buildings have been built by later generations of anthroposophic architects. Architects who have been strongly influenced by the anthroposophic style include Imre Makovecz in Hungary, Hans Scharoun and Joachim Eble in Germany, Erik Asmussen in Sweden, Kenji Imai in Japan, Thomas Rau, Anton Alberts and Max van Huut in the Netherlands, Christopher Day and Camphill Architects in the UK, Thompson and Rose in America, Denis Bowman in Canada, and Walter Burley Griffin and Gregory Burgess in Australia.<ref>Raab, Klingborg and Fånt, Eloquent Concrete, London: 1979.</ref>Sokolina, Anna, "The Goetheanum Culture in Modern Architecture." In: Science, Education and Experimental Design (Nauka, obrazovaniie i eksperimental'noie proiektirovaniie. Trudy MARKHI) (In Russian), edited by D.O. Shvidkovsky, G.V. Yesaulov, et al., 157-159. Moscow: MARKHI, 2014. 536p. ING House in Amsterdam is a contemporary building by an anthroposophical architect which has received awards for its ecological design and approach to a self-sustaining ecology as an autonomous building and example of sustainable architecture. Eurythmy Together with Marie von Sivers, Steiner developed eurythmy, a performance art combining dance, speech, and music.Earl j. Ogletree, Eurythmy: A therapeutic art of movement Journal of Special Education Fall 1976 vol. 10 no. 3 305-319 Social finance and entrepreneurship Around the world today are a number of banks, companies, charities, and schools for developing co-operative forms of business using Steiner's ideas about economic associations, aiming at harmonious and socially responsible roles in the world economy. The first anthroposophic bank was the Gemeinschaftsbank für Leihen und Schenken in Bochum, Germany, founded in 1974. Socially responsible banks founded out of anthroposophy include Triodos Bank, founded in the Netherlands in 1980 and also active in the UK, Germany, Belgium, Spain and France. Other examples include Cultura Sparebank which dates from 1982 when a group of Norwegian anthroposophists began an initiative for ethical banking but only began to operate as a savings bank in Norway in the late 90s, La Nef in France and RSF Social Financein San Francisco. Harvard Business School historian Geoffrey Jones traced the considerable impact both Steiner and later anthroposophical entrepreneurs had on the creation of many businesses in organic food, ecological architecture and sustainable finance. Organizational development, counselling and biography work Bernard Lievegoed, a psychiatrist, founded a new method of individual and institutional development oriented towards humanizing organizations and linked with Steiner's ideas of the threefold social order. This work is represented by the NPI Institute for Organizational Development in the Netherlands and sister organizations in many other countries. Various forms of biographic and counselling work have been developed on the basis of anthroposophy. Speech and drama There are also anthroposophical movements to renew speech and drama, the most important of which are based in the work of Marie Steiner-von Sivers (speech formation, also known as Creative Speech) and the Chekhov Method originated by Michael Chekhov (nephew of Anton Chekhov). Art Anthroposophic painting, a style inspired by Rudolf Steiner, featured prominently in the first Goetheanum's cupola. The technique frequently begins by filling the surface to be painted with color, out of which forms are gradually developed, often images with symbolic-spiritual significance. Paints that allow for many transparent layers are preferred, and often these are derived from plant materials. Rudolf Steiner appointed the English sculptor Edith Maryon as head of the School of Fine Art at the Goetheanum. Together they carved the 9-metre tall sculpture titled The Representative of Humanity, on display at the Goetheanum. Other Phenomenological approaches to science, pseudo-scientific ideas based on Goethe's philosophy of nature. New approaches to painting and sculpture. John Wilkes' fountain-like flowforms, sculptural forms that guide water into rhythmic movement for the purposes of decoration. Social goals For a period after World War I, Steiner was extremely active and well known in Germany, in part because he lectured widely proposing social reforms. Steiner was a sharp critic of nationalism, which he saw as outdated, and a proponent of achieving social solidarity through individual freedom. A petition proposing a radical change in the German constitution and expressing his basic social ideas (signed by Herman Hesse, among others) was widely circulated. His main book on social reform is Toward Social Renewal. Anthroposophy continues to aim at reforming society through maintaining and strengthening the independence of the spheres of cultural life, human rights and the economy. It emphasizes a particular ideal in each of these three realms of society: Liberty in cultural life Equality of rights, the sphere of legislation Fraternity in the economic sphere Esoteric path Paths of spiritual development According to Steiner, a real spiritual world exists, evolving along with the material one. Steiner held that the spiritual world can be researched in the right circumstances through direct experience, by persons practicing rigorous forms of ethical and cognitive self-discipline. Steiner described many exercises he said were suited to strengthening such self-discipline; the most complete exposition of these is found in his book How To Know Higher Worlds. The aim of these exercises is to develop higher levels of consciousness through meditation and observation. Details about the spiritual world, Steiner suggested, could on such a basis be discovered and reported, though no more infallibly than the results of natural science. Steiner regarded his research reports as being important aids to others seeking to enter into spiritual experience. He suggested that a combination of spiritual exercises (for example, concentrating on an object such as a seed), moral development (control of thought, feelings and will combined with openness, tolerance and flexibility) and familiarity with other spiritual researchers' results would best further an individual's spiritual development. He consistently emphasised that any inner, spiritual practice should be undertaken in such a way as not to interfere with one's responsibilities in outer life. Steiner distinguished between what he considered were true and false paths of spiritual investigation. In anthroposophy, artistic expression is also treated as a potentially valuable bridge between spiritual and material reality. Prerequisites to and stages of inner development Steiner's stated prerequisites to beginning on a spiritual path include a willingness to take up serious cognitive studies, a respect for factual evidence, and a responsible attitude. Central to progress on the path itself is a harmonious cultivation of the following qualities: Control over one's own thinking Control over one's will Composure Positivity Impartiality Steiner sees meditation as a concentration and enhancement of the power of thought. By focusing consciously on an idea, feeling or intention the meditant seeks to arrive at pure thinking, a state exemplified by but not confined to pure mathematics. In Steiner's view, conventional sensory-material knowledge is achieved through relating perception and concepts. The anthroposophic path of esoteric training articulates three further stages of supersensory knowledge, which do not necessarily follow strictly sequentially in any single individual's spiritual progress.Stein, W. J., Die moderne naturwissenschaftliche Vorstellungsart und die Weltanschauung Goethes, wie sie Rudolf Steiner vertritt, reprinted in Meyer, Thomas, W.J. Stein / Rudolf Steiner, pp. 267–75; 256–7. By focusing on symbolic patterns, images, and poetic mantras, the meditant can achieve consciously directed Imaginations that allow sensory phenomena to appear as the expression of underlying beings of a soul-spiritual nature. By transcending such imaginative pictures, the meditant can become conscious of the meditative activity itself, which leads to experiences of expressions of soul-spiritual beings unmediated by sensory phenomena or qualities. Steiner calls this stage Inspiration. By intensifying the will-forces through exercises such as a chronologically reversed review of the day's events, the meditant can achieve a further stage of inner independence from sensory experience, leading to direct contact, and even union, with spiritual beings ("Intuition") without loss of individual awareness. Spiritual exercises Steiner described numerous exercises he believed would bring spiritual development; other anthroposophists have added many others. A central principle is that "for every step in spiritual perception, three steps are to be taken in moral development." According to Steiner, moral development reveals the extent to which one has achieved control over one's inner life and can exercise it in harmony with the spiritual life of other people; it shows the real progress in spiritual development, the fruits of which are given in spiritual perception. It also guarantees the capacity to distinguish between false perceptions or illusions (which are possible in perceptions of both the outer world and the inner world) and true perceptions: i.e., the capacity to distinguish in any perception between the influence of subjective elements (i.e., viewpoint) and objective reality. Place in Western philosophy Steiner built upon Goethe's conception of an imaginative power capable of synthesizing the sense-perceptible form of a thing (an image of its outer appearance) and the concept we have of that thing (an image of its inner structure or nature). Steiner added to this the conception that a further step in the development of thinking is possible when the thinker observes his or her own thought processes. "The organ of observation and the observed thought process are then identical, so that the condition thus arrived at is simultaneously one of perception through thinking and one of thought through perception." Thus, in Steiner's view, we can overcome the subject-object divide through inner activity, even though all human experience begins by being conditioned by it. In this connection, Steiner examines the step from thinking determined by outer impressions to what he calls sense-free thinking. He characterizes thoughts he considers without sensory content, such as mathematical or logical thoughts, as free deeds. Steiner believed he had thus located the origin of free will in our thinking, and in particular in sense-free thinking. Some of the epistemic basis for Steiner's later anthroposophical work is contained in the seminal work, Philosophy of Freedom. In his early works, Steiner sought to overcome what he perceived as the dualism of Cartesian idealism and Kantian subjectivism by developing Goethe's conception of the human being as a natural-supernatural entity, that is: natural in that humanity is a product of nature, supernatural in that through our conceptual powers we extend nature's realm, allowing it to achieve a reflective capacity in us as philosophy, art and science. Steiner was one of the first European philosophers to overcome the subject-object split in Western thought. Though not well known among philosophers, his philosophical work was taken up by Owen Barfield (and through him influenced the Inklings, an Oxford group of Christian writers that included J. R. R. Tolkien and C. S. Lewis). Christian and Jewish mystical thought have also influenced the development of anthroposophy.Paddock, F. and Spiegler, M., Judaism and Anthroposophy, 2003 Union of science and spirit Steiner believed in the possibility of applying the clarity of scientific thinking to spiritual experience, which he saw as deriving from an objectively existing spiritual world. Steiner identified mathematics, which attains certainty through thinking itself, thus through inner experience rather than empirical observation, as the basis of his epistemology of spiritual experience. Relationship to religion Christ as the center of earthly evolution Steiner's writing, though appreciative of all religions and cultural developments, emphasizes Western tradition as having evolved to meet contemporary needs. He describes Christ and his mission on earth of bringing individuated consciousness as having a particularly important place in human evolution, whereby: Christianity has evolved out of previous religions; The being which manifests in Christianity also manifests in all faiths and religions, and each religion is valid and true for the time and cultural context in which it was born; All historical forms of Christianity need to be transformed considerably to meet the continuing evolution of humanity. Thus, anthroposophy considers there to be a being who unifies all religions, and who is not represented by any particular religious faith. This being is, according to Steiner, not only the Redeemer of the Fall from Paradise, but also the unique pivot and meaning of earth's evolutionary processes and of human history. To describe this being, Steiner periodically used terms such as the "Representative of Humanity" or the "good spirit" rather than any denominational term. Divergence from conventional Christian thought Steiner's views of Christianity diverge from conventional Christian thought in key places, and include gnostic elements: One central point of divergence is Steiner's views on reincarnation and karma. Steiner differentiated three contemporary paths by which he believed it possible to arrive at Christ: Through heart-felt experiences of the Gospels; Steiner described this as the historically dominant path, but becoming less important in the future. Through inner experiences of a spiritual reality; this Steiner regarded as increasingly the path of spiritual or religious seekers today. Through initiatory experiences whereby the reality of Christ's death and resurrection are experienced; Steiner believed this is the path people will increasingly take. Steiner also believed that there were two different Jesus children involved in the Incarnation of the Christ: one child descended from Solomon, as described in the Gospel of Matthew, the other child from Nathan, as described in the Gospel of Luke. (The genealogies given in the two gospels diverge some thirty generations before Jesus' birth, and 'Jesus' was a common name in biblical times.) His view of the second coming of Christ is also unusual; he suggested that this would not be a physical reappearance, but that the Christ being would become manifest in non-physical form, visible to spiritual vision and apparent in community life for increasing numbers of people beginning around the year 1933. He emphasized his belief that in the future humanity would need to be able to recognize the Spirit of Love in all its genuine forms, regardless of what name would be used to describe this being. He also warned that the traditional name of the Christ might be misused, and the true essence of this being of love ignored. According to Jane Gilmer, "Jung and Steiner were both versed in ancient gnosis and both envisioned a paradigmatic shift in the way it was delivered." As Gilles Quispel put it, "After all, Theosophy is a pagan, Anthroposophy a Christian form of modern Gnosis." Maria Carlson stated "Theosophy and Anthroposophy are fundamentally Gnostic systems in that they posit the dualism of Spirit and Matter." R. McL. Wilson in The Oxford Companion to the Bible agrees that Steiner and Anthroposophy are under the influence of gnosticism. Judaism Rudolf Steiner wrote and lectured on Judaism and Jewish issues over much of his adult life. He was a fierce opponent of popular antisemitism, but asserted that there was no justification for the existence of Judaism and Jewish culture in the modern world, a radical assimilationist perspective which saw the Jews completely integrating into the larger society.Peter Staudenmeier, "Rudolf Steiner and the Jewish Question" , Leo Baeck Institute Yearbook, Vol. 50, No. 1 (2005): 127-147. He also supported Émile Zola's position in the Dreyfus affair. Steiner emphasized Judaism's central importance to the constitution of the modern era in the West but suggested that to appreciate the spirituality of the future it would need to overcome its tendency toward abstraction. Steiner has financed the publication of the book Die Entente-Freimaurerei und der Weltkrieg (1919) by ; Steiner also wrote the foreword for the book, partly based upon his own ideas. The publication comprised a conspiracy theory according to whom World War I was a consequence of a collusion of Freemasons and Jews - still favorite scapegoats of the conspiracy theorists - their purpose being the destruction of Germany. The writing was later enthusiastically received by the Nazi Party. According to Dick Taverne Steiner was a Nazi (i.e. a member of the NSDAP). In his later life, Steiner was accused by the Nazis of being a Jew, and Adolf Hitler called anthroposophy "Jewish methods". The anthroposophical institutions in Germany were banned during Nazi rule and several anthroposophists sent to concentration camps.Lorenzo Ravagli, Unter Hammer und Hakenkreuz: Der völkisch-nationalsozialistische Kampf gegen die Anthroposophie, Verlag Freies Geistesleben, Important early anthroposophists who were Jewish included two central members on the executive boards of the precursors to the modern Anthroposophical Society, and Karl König, the founder of the Camphill movement, who had converted to Christianity. Martin Buber and Hugo Bergmann, who viewed Steiner's social ideas as a solution to the Arab–Jewish conflict, were also influenced by anthroposophy. There are numerous anthroposophical organisations in Israel, including the anthroposophical kibbutz Harduf, founded by Jesaiah Ben-Aharon, forty Waldorf kindergartens and seventeen Waldorf schools (stand as of 2018). A number of these organizations are striving to foster positive relationships between the Arab and Jewish populations: The Harduf Waldorf school includes both Jewish and Arab faculty and students, and has extensive contact with the surrounding Arab communities, while the first joint Arab-Jewish kindergarten was a Waldorf program in Hilf near Haifa. Christian Community Towards the end of Steiner's life, a group of theology students (primarily Lutheran, with some Roman Catholic members) approached Steiner for help in reviving Christianity, in particular "to bridge the widening gulf between modern science and the world of spirit". They approached a notable Lutheran pastor, Friedrich Rittelmeyer, who was already working with Steiner's ideas, to join their efforts. Out of their co-operative endeavor, the Movement for Religious Renewal, now generally known as The Christian Community, was born. Steiner emphasized that he considered this movement, and his role in creating it, to be independent of his anthroposophical work, as he wished anthroposophy to be independent of any particular religion or religious denomination. Reception Anthroposophy's supporters include Saul Bellow, Selma Lagerlöf, Andrei Bely, Joseph Beuys, Owen Barfield, architect Walter Burley Griffin, Wassily Kandinsky, Andrei Tarkovsky, Bruno Walter, Right Livelihood Award winners Sir George Trevelyan, and Ibrahim Abouleish, and child psychiatrist Eva Frommer.Fiona Subotsky, Eva Frommer (Obituary) , 29 April 2005. The historian of religion Olav Hammer has termed anthroposophy "the most important esoteric society in European history." However authors, scientists, and physicians including Michael Shermer, Michael Ruse, Edzard Ernst, David Gorski, and Simon Singh have criticized anthroposophy's application in the areas of medicine, biology, agriculture, and education to be dangerous and pseudoscientific. Others including former Waldorf pupil Dan Dugan and historian Geoffrey Ahern have criticized anthroposophy itself as a dangerous quasi-religious movement that is fundamentally anti-rational and anti-scientific. Scientific basis Though Rudolf Steiner studied natural science at the Vienna Technical University at the undergraduate level, his doctorate was in epistemology and very little of his work is directly concerned with the empirical sciences. In his mature work, when he did refer to science it was often to present phenomenological or Goethean science as an alternative to what he considered the materialistic science of his contemporaries. Steiner's primary interest was in applying the methodology of science to realms of inner experience and the spiritual worlds (his appreciation that the essence of science is its method of inquiry is unusual among esotericists), and Steiner called anthroposophy Geisteswissenschaft'' (science of the mind, cultural/spiritual science), a term generally used in German to refer to the humanities and social sciences. Whether this is a sufficient basis for anthroposophy to be considered a spiritual science has been a matter of controversy. As Freda Easton explained in her study of Waldorf schools, "Whether one accepts anthroposophy as a science depends upon whether one accepts Steiner's interpretation of a science that extends the consciousness and capacity of human beings to experience their inner spiritual world." Sven Ove Hansson has disputed anthroposophy's claim to a scientific basis, stating that its ideas are not empirically derived and neither reproducible nor testable. Carlo Willmann points out that as, on its own terms, anthroposophical methodology offers no possibility of being falsified except through its own procedures of spiritual investigation, no intersubjective validation is possible by conventional scientific methods; it thus cannot stand up to empiricist critics. Peter Schneider describes such objections as untenable, asserting that if a non-sensory, non-physical realm exists, then according to Steiner the experiences of pure thinking possible within the normal realm of consciousness would already be experiences of that, and it would be impossible to exclude the possibility of empirically grounded experiences of other supersensory content. Olav Hammer suggests that anthroposophy carries scientism "to lengths unparalleled in any other Esoteric position" due to its dependence upon claims of clairvoyant experience, its subsuming natural science under "spiritual science." Hammer also asserts that the development of what she calls "fringe" sciences such as anthroposophic medicine and biodynamic agriculture are justified partly on the basis of the ethical and ecological values they promote, rather than purely on a scientific basis. Though Steiner saw that spiritual vision itself is difficult for others to achieve, he recommended open-mindedly exploring and rationally testing the results of such research; he also urged others to follow a spiritual training that would allow them directly to apply his methods to achieve comparable results. Anthony Storr stated about Rudolf Steiner's Anthroposophy: "His belief system is so eccentric, so unsupported by evidence, so manifestly bizarre, that rational skeptics are bound to consider it delusional... But, whereas Einstein's way of perceiving the world by thought became confirmed by experiment and mathematical proof, Steiner's remained intensely subjective and insusceptible of objective confirmation." According to Dan Dugan, Steiner was a champion of the following pseudoscientific claims, also championed by Waldorf schools: wrong color theory; obtuse criticism of the theory of relativity; weird ideas about motions of the planets; supporting vitalism; doubting germ theory; weird approach to physiological systems; "the heart is not a pump". Religious nature As an explicitly spiritual movement, anthroposophy has sometimes been called a religious philosophy. In 1998 People for Legal and Non-Sectarian Schools (PLANS) started a lawsuit alleging that anthroposophy is a religion for Establishment Clause purposes and therefore several California school districts should not be chartering Waldorf schools; the lawsuit was dismissed in 2012 for failure to show anthroposophy was a religion. In 2000, a French court ruled that a government minister's description of anthroposophy as a cult was defamatory. Scholars claim Anthroposophy is influenced by Christian Gnosticism. The Catholic Church did in 1919 issue an edict classifying Anthroposophy as "a neognostic heresy" despite the fact that Steiner "very well respected the distinctions on which Catholic dogma insists". Postwar relations however have been much warmer with Karol Wojtyla (later coronated Pope John Paul II) enjoying close relationship with Anthroposophical insight and community, even publishing literary forward for friend and mentor M. Kotlarczyk in addition. Meanwhile the Steiner-inspired Christian Community congregations as mentioned in the Catholic Encyclopedia are received ecumenically as recognized and genuine Christian believers worldwide. Despite this, some Baptist and mainstream academical heresiologists still appear inclined to agree with the more narrow prior edict of 1919 on dogma and the Lutheran (Missouri Sinod) apologist and heresiologist Eldon K. Winker asserts Steiner's Christology as being very similar to Cerinthus. Steiner did perceive "a distinction between the human person Jesus, and Christ as the divine Logos", which could be construed as Gnostic but not docetic. Steiner-inspired Christian Community congregations globally were among the first congregations and denominations to include both female and gay pastors as early as the 1920's - in contrast with the range of other Christian denominations in their times, and well beyond. Statements on race Some anthroposophical ideas challenged the National Socialist racialist and nationalistic agenda. In contrast, some American educators have criticized Waldorf schools for failing to equally include the fables and myths of all cultures, instead favoring European stories over African ones. From the mid-1930s on, National Socialist ideologues attacked the anthroposophical worldview as being opposed to Nazi racist and nationalistic principles; anthroposophy considered "Blood, Race and Folk" as primitive instincts that must be overcome. An academic analysis of the educational approach in public schools noted that "[A] naive version of the evolution of consciousness, a theory foundational to both Steiner's anthroposophy and Waldorf education, sometimes places one race below another in one or another dimension of development. It is easy to imagine why there are disputes [...] about Waldorf educators' insisting on teaching Norse tales and Greek myths to the exclusion of African modes of discourse." In response to such critiques, the Anthroposophical Society in America published in 1998 a statement clarifying its stance: We explicitly reject any racial theory that may be construed to be part of Rudolf Steiner's writings. The Anthroposophical Society in America is an open, public society and it rejects any purported spiritual or scientific theory on the basis of which the alleged superiority of one race is justified at the expense of another race. Tommy Wieringa, a Dutch writer who grew among Anthroposophists, commenting upon an essay by the Anthroposophist , he wrote "It was a meeting of old acquaintances: Nazi leaders such as Rudolf Hess and Heinrich Himmler already recognized a kindred spirit in Rudolf Steiner, with his theories about racial purity, esoteric medicine and biodynamic agriculture." Adolf Hitler personally called on the Nazis to declare "war against Steiner" and he immediately had to flee to Switzerland, never to set foot in Germany again. The racism of Anthroposophy is spiritual and paternalistic (i.e. benevolent), while the racism of fascism is materialistic and often malign. Olav Hammer, university professor expert in new religious movements and Western esotericism, confirms that now the racist and anti-Semitic character of Steiner's teachings can no longer be denied, even if that is "spiritual racism". Imperfect as critics assert his writings and work may be, both critics and proponents alike nevertheless acknowledge his extensive body of anti-racist statements, often far ahead of his predecessors and even contemporaries who are still commonly cited in academia and beyond in modern times. See also Esotericism in Germany and Austria Pneumatosophy Spiritual but not religious References External links Rudolf Steiner Archive (Steiner's works online) Steiner's complete works in German Rudolf Steiner Handbook (PDF; 56 MB) Goetheanum Societies General Anthroposophical Society Anthroposophical Society in America Anthroposophical Society in Great Britain Anthroposophical Initiatives in India Anthroposophical Society in Australia Anthroposophical Society in New Zealand Esoteric Christianity Esoteric schools of thought Rudolf Steiner Spirituality
2499
https://en.wikipedia.org/wiki/Asynchronous%20Transfer%20Mode
Asynchronous Transfer Mode
Asynchronous Transfer Mode (ATM) is a telecommunications standard defined by the American National Standards Institute and ITU-T (formerly CCITT) for digital transmission of multiple types of traffic. ATM was developed to meet the needs of the Broadband Integrated Services Digital Network as defined in the late 1980s, and designed to integrate telecommunication networks. It can handle both traditional high-throughput data traffic and real-time, low-latency content such as telephony (voice) and video. ATM provides functionality that uses features of circuit switching and packet switching networks by using asynchronous time-division multiplexing. In the OSI reference model data link layer (layer 2), the basic transfer units are called frames. In ATM these frames are of a fixed length (53 octets) called cells. This differs from approaches such as Internet Protocol (IP) (OSI layer 3) or Ethernet (also layer 2) that use variable-sized packets or frames. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the data exchange begins. These virtual circuits may be either permanent (dedicated connections that are usually preconfigured by the service provider), or switched (set up on a per-call basis using signaling and disconnected when the call is terminated). The ATM network reference model approximately maps to the three lowest layers of the OSI model: physical layer, data link layer, and network layer. ATM is a core protocol used in the synchronous optical networking and synchronous digital hierarchy (SONET/SDH) backbone of the public switched telephone network and in the Integrated Services Digital Network (ISDN) but has largely been superseded in favor of next-generation networks based on IP technology. Wireless and mobile ATM never established a significant foothold. Protocol architecture To minimize queuing delay and packet delay variation (PDV), all ATM cells are the same small size. Reduction of PDV is particularly important when carrying voice traffic, because the conversion of digitized voice into an analog audio signal is an inherently real-time process. The decoder needs an evenly spaced stream of data items. At the time of the design of ATM, 155 Mbit/s synchronous digital hierarchy with 135 Mbit/s payload was considered a fast optical network link, and many plesiochronous digital hierarchy links in the digital network were considerably slower, ranging from 1.544 to 45 Mbit/s in the US, and 2 to 34 Mbit/s in Europe. At 155 Mbit/s, a typical full-length 1,500 byte Ethernet frame would take 77.42 µs to transmit. On a lower-speed 1.544 Mbit/s T1 line, the same packet would take up to 7.8 milliseconds. A queuing delay induced by several such data packets might exceed the figure of 7.8 ms several times over. This was considered unacceptable for speech traffic. The design of ATM aimed for a low-jitter network interface. Cells were introduced to provide short queuing delays while continuing to support datagram traffic. ATM broke up all packets, data, and voice streams into 48-byte chunks, adding a 5-byte routing header to each one so that they could be reassembled later. The choice of 48 bytes was political rather than technical. When the CCITT (now ITU-T) was standardizing ATM, parties from the United States wanted a 64-byte payload because this was felt to be a good compromise between larger payloads optimized for data transmission and shorter payloads optimized for real-time applications like voice. Parties from Europe wanted 32-byte payloads because the small size (and therefore short transmission times) improve performance for voice applications. Most of the European parties eventually came around to the arguments made by the Americans, but France and a few others held out for a shorter cell length. With 32 bytes, France would have been able to implement an ATM-based voice network with calls from one end of France to the other requiring no echo cancellation. 48 bytes (plus 5 header bytes = 53) was chosen as a compromise between the two sides. 5-byte headers were chosen because it was thought that 10% of the payload was the maximum price to pay for routing information. ATM multiplexed these 53-byte cells instead of packets which reduced worst-case cell contention jitter by a factor of almost 30, reducing the need for echo cancellers. Cell structure An ATM cell consists of a 5-byte header and a 48-byte payload. ATM defines two different cell formats: user–network interface (UNI) and network–network interface (NNI). Most ATM links use UNI cell format. GFC The generic flow control (GFC) field is a 4-bit field that was originally added to support the connection of ATM networks to shared access networks such as a distributed queue dual bus (DQDB) ring. The GFC field was designed to give the User-Network Interface (UNI) 4 bits in which to negotiate multiplexing and flow control among the cells of various ATM connections. However, the use and exact values of the GFC field have not been standardized, and the field is always set to 0000. VPI Virtual path identifier (8 bits UNI, or 12 bits NNI) VCI Virtual channel identifier (16 bits) PT Payload type (3 bits) Bit 3 (msbit): Network management cell. If 0, user data cell and the following apply: Bit 2: Explicit forward congestion indication (EFCI); 1 = network congestion experienced Bit 1 (lsbit): ATM user-to-user (AAU) bit. Used by AAL5 to indicate packet boundaries. CLP Cell loss priority (1-bit) HEC Header error control (8-bit CRC, polynomial = X8 + X2 + X + 1) ATM uses the PT field to designate various special kinds of cells for operations, administration and management (OAM) purposes, and to delineate packet boundaries in some ATM adaptation layers (AAL). If the most significant bit (MSB) of the PT field is 0, this is a user data cell, and the other two bits are used to indicate network congestion and as a general-purpose header bit available for ATM adaptation layers. If the MSB is 1, this is a management cell, and the other two bits indicate the type: network management segment, network management end-to-end, resource management, and reserved for future use. Several ATM link protocols use the HEC field to drive a CRC-based framing algorithm, which allows locating the ATM cells with no overhead beyond what is otherwise needed for header protection. The 8-bit CRC is used to correct single-bit header errors and detect multi-bit header errors. When multi-bit header errors are detected, the current and subsequent cells are dropped until a cell with no header errors is found. A UNI cell reserves the GFC field for a local flow control and sub-multiplexing system between users. This was intended to allow several terminals to share a single network connection in the same way that two ISDN phones can share a single basic rate ISDN connection. All four GFC bits must be zero by default. The NNI cell format replicates the UNI format almost exactly, except that the 4-bit GFC field is re-allocated to the VPI field, extending the VPI to 12 bits. Thus, a single NNI ATM interconnection is capable of addressing almost 212 VPs of up to almost 216 VCs each. Service types ATM supports different types of services via AALs. Standardized AALs include AAL1, AAL2, and AAL5, and the rarely used AAL3 and AAL4. AAL1 is used for constant bit rate (CBR) services and circuit emulation. Synchronization is also maintained at AAL1. AAL2 through AAL4 are used for variable bitrate (VBR) services, and AAL5 for data. Which AAL is in use for a given cell is not encoded in the cell. Instead, it is negotiated by or configured at the endpoints on a per-virtual-connection basis. Following the initial design of ATM, networks have become much faster. A 1500 byte (12000-bit) full-size Ethernet frame takes only 1.2 µs to transmit on a 10 Gbit/s network, reducing the motivation for small cells to reduce jitter due to contention. The increased link speeds by themselves do not eliminate jitter due to queuing. ATM provides a useful ability to carry multiple logical circuits on a single physical or virtual medium, although other techniques exist, such as Multi-link PPP, Ethernet VLANs, VxLAN, MPLS, and multi-protocol support over SONET. Virtual circuits An ATM network must establish a connection before two parties can send cells to each other. This is called a virtual circuit (VC). It can be a permanent virtual circuit (PVC), which is created administratively on the end points, or a switched virtual circuit (SVC), which is created as needed by the communicating parties. SVC creation is managed by signaling, in which the requesting party indicates the address of the receiving party, the type of service requested, and whatever traffic parameters may be applicable to the selected service. Call admission is then performed by the network to confirm that the requested resources are available and that a route exists for the connection. Motivation ATM operates as a channel-based transport layer, using VCs. This is encompassed in the concept of the virtual paths (VP) and virtual channels. Every ATM cell has an 8- or 12-bit virtual path identifier (VPI) and 16-bit virtual channel identifier (VCI) pair defined in its header. The VCI, together with the VPI, is used to identify the next destination of a cell as it passes through a series of ATM switches on its way to its destination. The length of the VPI varies according to whether the cell is sent on a user-network interface (at the edge of the network), or if it is sent on a network-network interface (inside the network). As these cells traverse an ATM network, switching takes place by changing the VPI/VCI values (label swapping). Although the VPI/VCI values are not necessarily consistent from one end of the connection to the other, the concept of a circuit is consistent (unlike IP, where any given packet could get to its destination by a different route than the others). ATM switches use the VPI/VCI fields to identify the virtual channel link (VCL) of the next network that a cell needs to transit on its way to its final destination. The function of the VCI is similar to that of the data link connection identifier (DLCI) in Frame Relay and the logical channel number and logical channel group number in X.25. Another advantage of the use of virtual circuits comes with the ability to use them as a multiplexing layer, allowing different services (such as voice, Frame Relay, IP). The VPI is useful for reducing the switching table of some virtual circuits which have common paths. Types ATM can build virtual circuits and virtual paths either statically or dynamically. Static circuits (permanent virtual circuits or PVCs) or paths (permanent virtual paths or PVPs) require that the circuit is composed of a series of segments, one for each pair of interfaces through which it passes. PVPs and PVCs, though conceptually simple, require significant effort in large networks. They also do not support the re-routing of service in the event of a failure. Dynamically built PVPs (soft PVPs or SPVPs) and PVCs (soft PVCs or SPVCs), in contrast, are built by specifying the characteristics of the circuit (the service contract) and the two endpoints. ATM networks create and remove switched virtual circuits (SVCs) on demand when requested by an end station. One application for SVCs is to carry individual telephone calls when a network of telephone switches are interconnected using ATM. SVCs were also used in attempts to replace local area networks with ATM. Routing Most ATM networks supporting SPVPs, SPVCs, and SVCs use the Private Network-to-Network Interface (PNNI) protocol to share topology information between switches and select a route through a network. PNNI is a link-state routing protocol like OSPF and IS-IS. PNNI also includes a very powerful route summarization mechanism to allow construction of very large networks, as well as a call admission control (CAC) algorithm which determines the availability of sufficient bandwidth on a proposed route through a network in order to satisfy the service requirements of a VC or VP. Traffic engineering Another key ATM concept involves the traffic contract. When an ATM circuit is set up each switch on the circuit is informed of the traffic class of the connection. ATM traffic contracts form part of the mechanism by which quality of service (QoS) is ensured. There are four basic types (and several variants) which each have a set of parameters describing the connection. CBR Constant bit rate: a Peak Cell Rate (PCR) is specified, which is constant. VBR Variable bit rate: an average or Sustainable Cell Rate (SCR) is specified, which can peak at a certain level, a PCR, for a maximum interval before being problematic. ABR Available bit rate: a minimum guaranteed rate is specified. UBR Unspecified bit rate: traffic is allocated to all remaining transmission capacity. VBR has real-time and non-real-time variants, and serves for bursty traffic. Non-real-time is sometimes abbreviated to vbr-nrt. Most traffic classes also introduce the concept of cell-delay variation tolerance (CDVT), which defines the clumping of cells in time. Traffic policing To maintain network performance, networks may apply traffic policing to virtual circuits to limit them to their traffic contracts at the entry points to the network, i.e. the user–network interfaces (UNIs) and network-to-network interfaces (NNIs): usage/network parameter control (UPC and NPC). The reference model given by the ITU-T and ATM Forum for UPC and NPC is the generic cell rate algorithm (GCRA), which is a version of the leaky bucket algorithm. CBR traffic will normally be policed to a PCR and CDVt alone, whereas VBR traffic will normally be policed using a dual leaky bucket controller to a PCR and CDVt and an SCR and Maximum Burst Size (MBS). The MBS will normally be the packet (SAR-SDU) size for the VBR VC in cells. If the traffic on a virtual circuit is exceeding its traffic contract, as determined by the GCRA, the network can either drop the cells or mark the Cell Loss Priority (CLP) bit (to identify a cell as potentially redundant). Basic policing works on a cell by cell basis, but this is sub-optimal for encapsulated packet traffic (as discarding a single cell will invalidate the whole packet). As a result, schemes such as partial packet discard (PPD) and early packet discard (EPD) have been created that will discard a whole series of cells until the next packet starts. This reduces the number of useless cells in the network, saving bandwidth for full packets. EPD and PPD work with AAL5 connections as they use the end of packet marker: the ATM user-to-ATM user (AUU) indication bit in the payload-type field of the header, which is set in the last cell of a SAR-SDU. Traffic shaping Traffic shaping usually takes place in the network interface card (NIC) in user equipment, and attempts to ensure that the cell flow on a VC will meet its traffic contract, i.e. cells will not be dropped or reduced in priority at the UNI. Since the reference model given for traffic policing in the network is the GCRA, this algorithm is normally used for shaping as well, and single and dual leaky bucket implementations may be used as appropriate. Reference model The ATM network reference model approximately maps to the three lowest layers of the OSI reference model. It specifies the following layers: At the physical network level, ATM specifies a layer that is equivalent to the OSI physical layer. The ATM layer 2 roughly corresponds to the OSI data link layer. The OSI network layer is implemented as the ATM adaptation layer (AAL). Deployment ATM became popular with telephone companies and many computer makers in the 1990s. However, even by the end of the decade, the better price/performance of Internet Protocol-based products was competing with ATM technology for integrating real-time and bursty network traffic. Companies such as FORE Systems focused on ATM products, while other large vendors such as Cisco Systems provided ATM as an option. After the burst of the dot-com bubble, some still predicted that "ATM is going to dominate". However, in 2005 the ATM Forum, which had been the trade organization promoting the technology, merged with groups promoting other technologies, and eventually became the Broadband Forum. Wireless or mobile ATM Wireless ATM, or mobile ATM, consists of an ATM core network with a wireless access network. ATM cells are transmitted from base stations to mobile terminals. Mobility functions are performed at an ATM switch in the core network, known as "crossover switch", which is similar to the MSC (mobile switching center) of GSM networks. The advantage of wireless ATM is its high bandwidth and high speed handoffs done at layer 2. In the early 1990s, Bell Labs and NEC research labs worked actively in this field. Andy Hopper from the University of Cambridge Computer Laboratory also worked in this area. There was a wireless ATM forum formed to standardize the technology behind wireless ATM networks. The forum was supported by several telecommunication companies, including NEC, Fujitsu and AT&T. Mobile ATM aimed to provide high speed multimedia communications technology, capable of delivering broadband mobile communications beyond that of GSM and WLANs. See also VoATM ATM25 Notes References ATM Cell formats- Cisco Systems External links ATM Info and resources ATM ChipWeb - Chip and NIC database A tutorial from Juniper web site ATM Tutorial ITU-T recommendations Link protocols Networking standards
2503
https://en.wikipedia.org/wiki/African%20National%20Congress
African National Congress
The African National Congress (ANC) is a social-democratic political party in South Africa. A liberation movement known for its opposition to apartheid, it has governed the country since 1994, when the first post-apartheid election resulted in Nelson Mandela being elected as President of South Africa. Cyril Ramaphosa, the incumbent national President, has served as President of the ANC since 18 December 2017. Founded on 8 January 1912 in Bloemfontein as the South African Native National Congress, the organisation was formed to agitate for the rights of black South Africans. When the National Party government came to power in 1948, the ANC's central purpose became to oppose the new government's policy of institutionalised apartheid. To this end, its methods and means of organisation shifted; its adoption of the techniques of mass politics, and the swelling of its membership, culminated in the Defiance Campaign of civil disobedience in 1952–53. The ANC was banned by the South African government between April 1960 – shortly after the Sharpeville massacre – and February 1990. During this period, despite periodic attempts to revive its domestic political underground, the ANC was forced into exile by increasing state repression, which saw many of its leaders imprisoned on Robben Island. Headquartered in Lusaka, Zambia, the exiled ANC dedicated much of its attention to a campaign of sabotage and guerrilla warfare against the apartheid state, carried out under its military wing, uMkhonto we Sizwe, which was founded in 1961 in partnership with the South African Communist Party (SACP). The ANC was condemned as a terrorist organisation by the governments of South Africa, the United States, and the United Kingdom. However, it positioned itself as a key player in the negotiations to end apartheid, which began in earnest after the ban was repealed in 1990. In the post-apartheid era, the ANC continues to identify itself foremost as a liberation movement, although it is also a registered political party. Partly due to its Tripartite Alliance with the South African Communist Party (SACP) and the Congress of South African Trade Unions, it has retained a comfortable electoral majority at the national level and in most provinces, and has provided each of South Africa's five presidents since 1994. South Africa is considered a dominant-party state. However, the ANC's electoral majority has declined consistently since 2004, and in the most recent elections – the 2021 local elections – its share of the national vote dropped below 50% for the first time ever. Over the last decade, the party has been embroiled in a number of controversies, particularly relating to widespread allegations of political corruption among its members. History Origins A successor of the Cape Colony's Imbumba Yamanyama organisation, the ANC was founded as the South African Native National Congress in Bloemfontein on 8 January 1912, and was renamed the African National Congress in 1923. Pixley ka Isaka Seme, Sol Plaatje, John Dube, and Walter Rubusana founded the organisation, who, like much of the ANC's early membership, were from the conservative, educated, and religious professional classes of black South African society. Although they would not take part, Xhosa chiefs would show huge support for the organisation. As a result, King Jongilizwe donated 50 cows to during its founding. Around 1920, in a partial shift away from its early focus on the "politics of petitioning", the ANC developed a programme of passive resistance directed primarily at the expansion and entrenchment of pass laws. When Josiah Gumede took over as ANC president in 1927, he advocated for a strategy of mass mobilisation and cooperation with the Communist Party, but was voted out of office in 1930 and replaced with the traditionalist Seme, whose leadership saw the ANC's influence wane. In the 1940s, Alfred Bitini Xuma revived some of Gumede's programmes, assisted by a surge in trade union activity and by the formation in 1944 of the left-wing ANC Youth League under a new generation of activists, among them Walter Sisulu, Nelson Mandela, and Oliver Tambo. After the National Party was elected into government in 1948 on a platform of apartheid, entailing the further institutionalisation of racial segregation, this new generation pushed for a Programme of Action which explicitly advocated African nationalism and led the ANC, for the first time, to the sustained use of mass mobilisation techniques like strikes, stay-aways, and boycotts. This culminated in the 1952–53 Defiance Campaign, a campaign of mass civil disobedience organised by the ANC, the Indian Congress, and the coloured Franchise Action Council in protest of six apartheid laws. The ANC's membership swelled. In June 1955, it was one of the groups represented at the multi-racial Congress of the People in Kliptown, Soweto, which ratified the Freedom Charter, from then onwards a fundamental document in the anti-apartheid struggle. The Charter was the basis of the enduring Congress Alliance, but was also used as a pretext to prosecute hundreds of activists, among them most of the ANC's leadership, in the Treason Trial. Before the trial was concluded, the Sharpeville massacre occurred on 21 March 1960. In the aftermath, the ANC was banned by the South African government. It was not unbanned until February 1990, almost three decades later. Exile in Lusaka After its banning in April 1960, the ANC was driven underground, a process hastened by a barrage of government banning orders, by an escalation of state repression, and by the imprisonment of senior ANC leaders pursuant to the Rivonia trial and Little Rivonia trial. From around 1963, the ANC effectively abandoned much of even its underground presence inside South Africa and operated almost entirely from its external mission, with headquarters first in Morogoro, Tanzania, and later in Lusaka, Zambia. For the entirety of its time in exile, the ANC was led by Tambo – first de facto, with president Albert Luthuli under house arrest in Zululand; then in an acting capacity, after Luthuli's death in 1967; and, finally, officially, after a leadership vote in 1985. Also notable about this period was the extremely close relationship between the ANC and the reconstituted South African Communist Party (SACP), which was also in exile. uMkhonto we Sizwe In 1961, partly in response to the Sharpeville massacre, leaders of the SACP and the ANC formed a military body, Umkhonto we Sizwe (MK, Spear of the Nation), as a vehicle for armed struggle against the apartheid state. Initially, MK was not an official ANC body, nor had it been directly established by the ANC National Executive: it was considered an autonomous organisation, until such time as the ANC formally recognised it as its armed wing in October 1962. In the first half of the 1960s, MK was preoccupied with a campaign of sabotage attacks, especially bombings of unoccupied government installations. As the ANC reduced its presence inside South Africa, however, MK cadres were increasingly confined to training camps in Tanzania and neighbouring countries – with such exceptions as the Wankie Campaign, a momentous military failure. In 1969, Tambo was compelled to call the landmark Morogoro Conference to address the grievances of the rank-and-file, articulated by Chris Hani in a memorandum which depicted MK's leadership as corrupt and complacent. Although MK's malaise persisted into the 1970s, conditions for armed struggle soon improved considerably, especially after the Soweto uprising of 1976 in South Africa saw thousands of students – inspired by Black Consciousness ideas – cross the borders to seek military training. MK guerrilla activity inside South Africa increased steadily over this period, with one estimate recording an increase from 23 incidents in 1977 to 136 incidents in 1985. In the latter half of the 1980s, a number of South African civilians were killed in these attacks, a reversal of the ANC's earlier reluctance to incur civilian casualties. Fatal attacks included the 1983 Church Street bombing, the 1985 Amanzimtoti bombing, the 1986 Magoo's Bar bombing, and the 1987 Johannesburg Magistrate's Court bombing. Partly in retaliation, the South African Defence Force increasingly crossed the border to target ANC members and ANC bases, as in the 1981 raid on Maputo, 1983 raid on Maputo, and 1985 raid on Gaborone. During this period, MK activities led the governments of Margaret Thatcher and Ronald Reagan to condemn the ANC as a terrorist organisation. In fact, neither the ANC nor Mandela were removed from the U.S. terror watch list until 2008. The animosity of Western regimes was partly explained by the Cold War context, and by the considerable amount of support – both financial and technical – that the ANC received from the Soviet Union. Negotiations to end apartheid From the mid-1980s, as international and internal opposition to apartheid mounted, elements of the ANC began to test the prospects for a negotiated settlement with the South African government, although the prudence of abandoning armed struggle was an extremely controversial topic within the organisation. Following preliminary contact between the ANC and representatives of the state, business, and civil society, President F. W. de Klerk announced in February 1990 that the government would unban the ANC and other banned political organisations, and that Mandela would be released from prison. Some ANC leaders returned to South Africa from exile for so-called "talks about talks", which led in 1990 and 1991 to a series of bilateral accords with the government establishing a mutual commitment to negotiations. Importantly, the Pretoria Minute of August 1990 included a commitment by the ANC to unilaterally suspend its armed struggle. This made possible the multi-party Convention for a Democratic South Africa and later the Multi-Party Negotiating Forum, in which the ANC was regarded as the main representative of the interests of the anti-apartheid movement. However, ongoing political violence, which the ANC attributed to a state-sponsored third force, led to recurrent tensions. Most dramatically, after the Boipatong massacre of June 1992, the ANC announced that it was withdrawing from negotiations indefinitely. It faced further casualties in the Bisho massacre, the Shell House massacre, and in other clashes with state forces and supporters of the Inkatha Freedom Party (IFP). However, once negotiations resumed, they resulted in November 1993 in an interim Constitution, which governed South Africa's first democratic elections on 27 April 1994. In the elections, the ANC won an overwhelming 62.65% majority of the vote. Mandela was elected president and formed a coalition Government of National Unity, which, under the provisions of the interim Constitution, also included the National Party and IFP. The ANC has controlled the national government since then. Breakaways In the post-apartheid era, two significant breakaway groups have been formed by former ANC members. The first is the Congress of the People, founded by Mosiuoa Lekota in 2008 in the aftermath of the Polokwane elective conference, when the ANC declined to re-elect Thabo Mbeki as its president and instead compelled his resignation from the national presidency. The second breakaway is the Economic Freedom Fighters, founded in 2013 after youth leader Julius Malema was expelled from the ANC. Before these, the most important split in the ANC's history occurred in 1959, when Robert Sobukwe led a splinter faction of African nationalists to the new Pan Africanist Congress. Current structure and composition Leadership Under the ANC constitution, every member of the ANC belongs to a local branch, and branch members select the organisation's policies and leaders. They do so primarily by electing delegates to the National Conference, which is currently convened every five years. Between conferences, the organisation is led by its 86-member National Executive Committee, which is elected at each conference. The most senior members of the National Executive Committee are the so-called Top Six officials, the ANC president primary among them. A symmetrical process occurs at the subnational levels: each of the nine provincial executive committees and regional executive committees are elected at provincial and regional elective conferences respectively, also attended by branch delegates; and branch officials are elected at branch general meetings. Leagues The ANC has three leagues: the Women's League, the Youth League and the Veterans' League. Under the ANC constitution, the leagues are autonomous bodies with the scope to devise their own constitutions and policies; for the purpose of national conferences, they are treated somewhat like provinces, with voting delegates and the power to nominate leadership candidates. Tripartite Alliance The ANC is recognised as the leader of a three-way alliance, known as the Tripartite Alliance, with the SACP and Congress of South African Trade Unions (COSATU). The alliance was formalised in mid-1990, after the ANC was unbanned, but has deeper historical roots: the SACP had worked closely with the ANC in exile, and COSATU had aligned itself with the Freedom Charter and Congress Alliance in 1987. The membership and leadership of the three organisations has traditionally overlapped significantly. The alliance constitutes a de facto electoral coalition: the SACP and COSATU do not contest in government elections, but field candidates through the ANC, hold senior positions in the ANC, and influence party policy. However, the SACP, in particular, has frequently threatened to field its own candidates, and in 2017 it did so for the first time, running against the ANC in by-elections in the Metsimaholo municipality, Free State. Electoral candidates Under South Africa's closed-list proportional representation electoral system, parties have immense power in selecting candidates for legislative bodies. The ANC's internal candidate selection process is overseen by so-called list committees and tends to involve a degree of broad democratic participation, especially at the local level, where ANC branches vote to nominate candidates for the local government elections. Between 2003 and 2008, the ANC also gained a significant number of members through the controversial floor crossing process, which occurred especially at the local level. The leaders of the executive in each sphere of government – the president, the provincial premiers, and the mayors – are indirectly elected after each election. In practice, the selection of ANC candidates for these positions is highly centralised, with the ANC caucus voting together to elect a pre-decided candidate. Although the ANC does not always announce whom its caucuses intend to elect, the National Assembly has thus far always elected the ANC president as the national president. Cadre deployment The ANC has adhered to a formal policy of cadre deployment since 1985. In the post-apartheid era, the policy includes but is not exhausted by selection of candidates for elections and government positions: it also entails that the central organisation "deploys" ANC members to various other strategic positions in the party, state, and economy. Ideology and policies The ANC prides itself on being a broad church, and, like many dominant parties, resembles a catch-all party, accommodating a range of ideological tendencies. As Mandela told the Washington Post in 1990:The ANC has never been a political party. It was formed as a parliament of the African people. Right from the start, up to now, the ANC is a coalition, if you want, of people of various political affiliations. Some will support free enterprise, others socialism. Some are conservatives, others are liberals. We are united solely by our determination to oppose racial oppression. That is the only thing that unites us. There is no question of ideology as far as the odyssey of the ANC is concerned, because any question approaching ideology would split the organization from top to bottom. Because we have no connection whatsoever except at this one, of our determination to dismantle apartheid. The post-apartheid ANC continues to identify itself foremost as a liberation movement, pursuing "the complete liberation of the country from all forms of discrimination and national oppression". It also continues to claim the Freedom Charter of 1955 as "the basic policy document of the ANC". However, as NEC member Jeremy Cronin noted in 2007, the various broad principles of the Freedom Charter have been given different interpretations, and emphasised to differing extents, by different groups within the organisation. Nonetheless, some basic commonalities are visible in the policy and ideological preferences of the organisation's mainstream. Non-racialism The ANC is committed to the ideal of non-racialism and to opposing "any form of racial, tribalistic or ethnic exclusivism or chauvinism". National Democratic Revolution The 1969 Morogoro Conference committed the ANC to a "national democratic revolution [which] – destroying the existing social and economic relationship – will bring with it a correction of the historical injustices perpetrated against the indigenous majority and thus lay the basis for a new – and deeper internationalist – approach". For the movement's intellectuals, the concept of the National Democratic Revolution (NDR) was a means of reconciling the anti-apartheid and anti-colonial project with a second goal, that of establishing domestic and international socialism – the ANC is a member of the Socialist International, and its close partner the SACP traditionally conceives itself as a vanguard party. Specifically, and as implied by the 1969 document, NDR doctrine entails that the transformation of the domestic political system (national struggle, in Joe Slovo's phrase) is a precondition for a socialist revolution (class struggle). The concept remained important to ANC intellectuals and strategists after the end of apartheid. Indeed, the pursuit of the NDR is one of the primary objectives of the ANC as set out in its constitution. As with the Freedom Charter, the ambiguity of the NDR has allowed it to bear varying interpretations. For example, whereas SACP theorists tend to emphasise the anti-capitalist character of the NDR, some ANC policymakers have construed it as implying the empowerment of the black majority even within a market-capitalist scheme. Economic interventionism Since 1994, consecutive ANC governments have held a strong preference for a significant degree of state intervention in the economy. The ANC's first comprehensive articulation of its post-apartheid economic policy framework was set out in the Reconstruction and Development Programme (RDP) document of 1994, which became its electoral manifesto and also, under the same name, the flagship policy of Nelson Mandela's government. The RDP aimed both to redress the socioeconomic inequalities created by colonialism and apartheid, and to promote economic growth and development; state intervention was judged a necessary step towards both goals. Specifically, the state was to intervene in the economy through three primary channels: a land reform programme; a degree of economic planning, through industrial and trade policy; and state investments in infrastructure and the provision of basic services, including health and education. Although the RDP was abandoned in 1996, these three channels of state economic intervention have remained mainstays of subsequent ANC policy frameworks. Neoliberal turn In 1996, Mandela's government replaced the RDP with the Growth Employment and Redistribution (GEAR) programme, which was maintained under President Thabo Mbeki, Mandela's successor. GEAR has been characterised as a neoliberal policy, and it was disowned by both COSATU and the SACP. While some analysts viewed Mbeki's economic policy as undertaking the uncomfortable macroeconomic adjustments necessary for long-term growth, others – notably Patrick Bond – viewed it as a reflection of the ANC's failure to implement genuinely radical transformation after 1994. Debate about ANC commitment to redistribution on a socialist scale has continued: in 2013, the country's largest trade union, the National Union of Metalworkers of South Africa, withdrew its support for the ANC on the basis that "the working class cannot any longer see the ANC or the SACP as its class allies in any meaningful sense". It is evident, however, that the ANC never embraced free-market capitalism, and continued to favour a mixed economy: even as the debate over GEAR raged, the ANC declared itself (in 2004) a social-democratic party, and it was at that time presiding over phenomenal expansions of its black economic empowerment programme and the system of social grants. Developmental state As its name suggests, the RDP emphasised state-led development – that is, a developmental state – which the ANC has typically been cautious, at least in its rhetoric, to distinguish from the neighbouring concept of a welfare state. In the mid-2000s, during Mbeki's second term, the notion of a developmental state was revived in South African political discourse when the national economy worsened; and the 2007 National Conference whole-heartedly endorsed developmentalism in its policy resolutions, calling for a state "at the centre of a mixed economy... which leads and guides that economy and which intervenes in the interest of the people as a whole". The proposed developmental state was also central to the ANC's campaign in the 2009 elections, and it remains a central pillar of the policy of the current government, which seeks to build a "capable and developmental" state. In this regard, ANC politicians often cite China as an aspirational example. A discussion document ahead of the ANC's 2015 National General Council proposed that:China['s] economic development trajectory remains a leading example of the triumph of humanity over adversity. The exemplary role of the collective leadership of the Communist Party of China in this regard should be a guiding lodestar of our own struggle. Radical economic transformation Towards the end of Jacob Zuma's presidency, an ANC faction aligned to Zuma pioneered a new policy platform referred to as radical economic transformation (RET). Zuma announced the new focus on RET during his February 2017 State of the Nation address, and later that year, explaining that it had been adopted as ANC policy and therefore as government policy, defined it as entailing "fundamental change in the structures, systems, institutions and patterns of ownership and control of the economy, in favour of all South Africans, especially the poor". Arguments for RET were closely associated with the rhetorical concept of white monopoly capital. At the 54th National Conference in 2017, the ANC endorsed a number of policy principles advocated by RET supporters, including their proposal to pursue land expropriation without compensation as a matter of national policy. Foreign policy and relations The ANC has long had close ties with China and the Chinese Communist Party (CCP), with the CCP having supported ANC's struggle of apartheid since 1961. In 2008, the two parties signed a memorandum of understanding to train ANC members in China. President Cyril Ramaphosa and the ANC have not condemned the Russian invasion of Ukraine, and have faced criticism from opposition parties, public commentators, academics, civil society organisations, and former ANC members due to this. The ANC youth wing has meanwhile condemned sanctions against Russia and denounced NATO's eastward expansion as "fascistic". Officials representing the ANC Youth League acted as international observers for Russia's staged referendum to annex Ukrainian territory conquered during the war. Symbols and media Flag and logo The logo of the ANC incorporates a spear and shield – symbolising the historical and ongoing struggle, armed and otherwise, against colonialism and racial oppression – and a wheel, which is borrowed from the 1955 Congress of the People campaign and therefore symbolises a united and non-racial movement for freedom and equality. The logo uses the same colours as the ANC flag, which comprises three horizontal stripes of equal width in black, green and gold. The black symbolises the native people of South Africa; the green represents the land of South Africa; and the gold represents the country's mineral and other natural wealth. The black, green and gold tricolour also appeared on the flag of the KwaZulu bantustan and appears on the flag of the ANC's rival, the IFP; and all three colours appear in the post-apartheid South African national flag. Publications Since 1996, the ANC Department of Political Education has published the quarterly Umrabulo political discussion journal; and ANC Today, a weekly online newsletter, was launched in 2001 to offset the alleged bias of the press. In addition, since 1972, it has been traditional for the ANC president to publish annually a so-called January 8 Statement: a reflective letter sent to members on 8 January, the anniversary of the organisation's founding. In earlier years, the ANC published a range of periodicals, the most important of which was the monthly journal Sechaba (1967–1990), printed in the German Democratic Republic and banned by the apartheid government. The ANC's Radio Freedom also gained a wide audience during apartheid. Amandla "Amandla ngawethu", or the Sotho variant "Matla ke arona", is a common rallying call at ANC meetings, roughly meaning "power to the people". It is also common for meetings to sing so-called struggle songs, which were sung during anti-apartheid meetings and in MK camps. In the case of at least two of these songs – Dubula ibhunu and Umshini wami – this has caused controversy in recent years. Criticism and controversy Corruption controversies The most prominent corruption case involving the ANC relates to a series of bribes paid to companies involved in the ongoing R55 billion Arms Deal saga, which resulted in a long term jail sentence to then Deputy President Jacob Zuma's legal adviser Schabir Shaik. Zuma, the former South African President, was charged with fraud, bribery and corruption in the Arms Deal, but the charges were subsequently withdrawn by the National Prosecuting Authority of South Africa due to their delay in prosecution. The ANC has also been criticised for its subsequent abolition of the Scorpions, the multidisciplinary agency that investigated and prosecuted organised crime and corruption, and was heavily involved in the investigation into Zuma and Shaik. Tony Yengeni, in his position as chief whip of the ANC and head of the Parliaments defence committee has recently been named as being involved in bribing the German company ThyssenKrupp over the purchase of four corvettes for the SANDF. Other recent corruption issues include the sexual misconduct and criminal charges of Beaufort West municipal manager Truman Prince, and the Oilgate scandal, in which millions of Rand in funds from a state-owned company were funnelled into ANC coffers. The ANC has also been accused of using government and civil society to fight its political battles against opposition parties such as the Democratic Alliance. The result has been a number of complaints and allegations that none of the political parties truly represent the interests of the poor. This has resulted in the "No Land! No House! No Vote!" Campaign which became very prominent during elections. In 2018, the New York Times reported on the killings of ANC corruption whistleblowers. During an address on 28 October 2021, former president Thabo Mbeki commented on the history of corruption within the ANC. He reflected that Mandela had already warned in 1997 that the ANC was attracting individuals who viewed the party as "a route to power and self-enrichment." He added that the ANC leadership "did not know how to deal with this problem." During a lecture on 10 December, Mbeki reiterated concerns about "careerists" within the party, and stressed the need to "purge itself of such members". Condemnation over Secrecy Bill In late 2011 the ANC was heavily criticised over the passage of the Protection of State Information Bill, which opponents claimed would improperly restrict the freedom of the press. Opposition to the bill included otherwise ANC-aligned groups such as COSATU. Notably, Nelson Mandela and other Nobel laureates Nadine Gordimer, Archbishop Desmond Tutu, and F. W. de Klerk have expressed disappointment with the bill for not meeting standards of constitutionality and aspirations for freedom of information and expression. Role in the Marikana killings The ANC have been criticised for its role in failing to prevent 16 August 2012 massacre of Lonmin miners at Marikana in the Northwest. Some allege that Police Commissioner Riah Phiyega and Police Minister Nathi Mthethwa may have given the go ahead for the police action against the miners on that day. Commissioner Phiyega of the ANC came under further criticism as being insensitive and uncaring when she was caught smiling and laughing during the Farlam Commission's video playback of the 'massacre'. Archbishop Desmond Tutu has announced that he no longer can bring himself to exercise a vote for the ANC as it is no longer the party that he and Nelson Mandela fought for, and that the party has now lost its way, and is in danger of becoming a corrupt entity in power. Financial mismanagement Since at least 2017, the ANC has encountered significant problems related to financial mismanagement. According to a report filed by the former treasurer-general Zweli Mkhize in December 2017, the ANC was technically insolvent as its liabilities exceeded its assets. These problems continued into the second half of 2021. By September 2021, the ANC had reportedly amassed a debt exceeding R200-million, including over R100-million owed to the South African Revenue Service. Beginning in May 2021, the ANC failed to pay monthly staff salaries on time. Having gone without pay for three consecutive months, workers planned a strike in late August 2021. In response, the ANC initiated a crowdfunding campaign to raise money for staff salaries. By November 2021, its Cape Town staff was approaching their fourth month without salaries, while medical aid and provident fund contributions had been suspended in various provinces. The party has countered that the Political Party Funding Act, which prohibits anonymous contributions, has dissuaded some donors who previously injected money for salaries. State capture In January 2018, then-President Jacob Zuma established the Zondo Commission to investigate allegations of state capture, corruption, and fraud in the public sector. Over the following four years, the Commission heard testimony from over 250 witnesses and collected more than 150,000 pages of evidence. After several extensions, the first part of the final three-part report was published on 4 January 2022. The report found that the ANC, including Zuma and his political allies, had benefited from the extensive corruption of state enterprises, including the South African Revenue Service. It also found that the ANC "simply did not care that state entities were in decline during state capture or they slept on the job – or they simply didn't know what to do." Electoral history National Assembly elections National Council of Provinces elections Provincial legislatures Municipal elections See also :Category:Members of the African National Congress Democratic Alliance Solomon Mahlangu Freedom College Step-aside rule State v. Ebrahim United Democratic Front References External links Official website Sechaba archive Mayibuye archive Attacks attributed to the ANC on the START terrorism database List of articles & videos about the ANC Response by the ANC General Secretary to COSATU's assessment, 2004 Anti-Apartheid organisations National liberation movements Organisations associated with apartheid Political parties in South Africa Political parties based in Johannesburg Social democratic parties in South Africa Full member parties of the Socialist International Political parties established in 1912 Organizations formerly designated as terrorist by the United States
2504
https://en.wikipedia.org/wiki/Amphetamine
Amphetamine
Amphetamine (contracted from alpha-methylphenethylamine) is a central nervous system (CNS) stimulant that is used in the treatment of attention deficit hyperactivity disorder (ADHD), narcolepsy, and obesity. Amphetamine was discovered as a chemical in 1887 by Lazăr Edeleanu, and then as a drug in the late 1920s. It exists as two enantiomers: levoamphetamine and dextroamphetamine. Amphetamine properly refers to a specific chemical, the racemic free base, which is equal parts of the two enantiomers in their pure amine forms. The term is frequently used informally to refer to any combination of the enantiomers, or to either of them alone. Historically, it has been used to treat nasal congestion and depression. Amphetamine is also used as an athletic performance enhancer and cognitive enhancer, and recreationally as an aphrodisiac and euphoriant. It is a prescription drug in many countries, and unauthorized possession and distribution of amphetamine are often tightly controlled due to the significant health risks associated with recreational use. The first amphetamine pharmaceutical was Benzedrine, a brand which was used to treat a variety of conditions. Currently, pharmaceutical amphetamine is prescribed as racemic amphetamine, Adderall, dextroamphetamine, or the inactive prodrug lisdexamfetamine. Amphetamine increases monoamine and excitatory neurotransmission in the brain, with its most pronounced effects targeting the norepinephrine and dopamine neurotransmitter systems. At therapeutic doses, amphetamine causes emotional and cognitive effects such as euphoria, change in desire for sex, increased wakefulness, and improved cognitive control. It induces physical effects such as improved reaction time, fatigue resistance, and increased muscle strength. Larger doses of amphetamine may impair cognitive function and induce rapid muscle breakdown. Addiction is a serious risk with heavy recreational amphetamine use, but is unlikely to occur from long-term medical use at therapeutic doses. Very high doses can result in psychosis (e.g., delusions and paranoia) which rarely occurs at therapeutic doses even during long-term use. Recreational doses are generally much larger than prescribed therapeutic doses and carry a far greater risk of serious side effects. Amphetamine belongs to the phenethylamine class. It is also the parent compound of its own structural class, the substituted amphetamines, which includes prominent substances such as bupropion, cathinone, MDMA, and methamphetamine. As a member of the phenethylamine class, amphetamine is also chemically related to the naturally occurring trace amine neuromodulators, specifically phenethylamine and , both of which are produced within the human body. Phenethylamine is the parent compound of amphetamine, while is a positional isomer of amphetamine that differs only in the placement of the methyl group. Uses Medical Amphetamine is used to treat attention deficit hyperactivity disorder (ADHD), narcolepsy (a sleep disorder), and obesity, and is sometimes prescribed for its past medical indications, particularly for depression and chronic pain. Long-term amphetamine exposure at sufficiently high doses in some animal species is known to produce abnormal dopamine system development or nerve damage, but, in humans with ADHD, long-term use of pharmaceutical amphetamines at therapeutic doses appears to improve brain development and nerve growth. Reviews of magnetic resonance imaging (MRI) studies suggest that long-term treatment with amphetamine decreases abnormalities in brain structure and function found in subjects with ADHD, and improves function in several parts of the brain, such as the right caudate nucleus of the basal ganglia. Reviews of clinical stimulant research have established the safety and effectiveness of long-term continuous amphetamine use for the treatment of ADHD. Randomized controlled trials of continuous stimulant therapy for the treatment of ADHD spanning 2 years have demonstrated treatment effectiveness and safety. Two reviews have indicated that long-term continuous stimulant therapy for ADHD is effective for reducing the core symptoms of ADHD (i.e., hyperactivity, inattention, and impulsivity), enhancing quality of life and academic achievement, and producing improvements in a large number of functional outcomes across 9 categories of outcomes related to academics, antisocial behavior, driving, non-medicinal drug use, obesity, occupation, self-esteem, service use (i.e., academic, occupational, health, financial, and legal services), and social function. One review highlighted a nine-month randomized controlled trial of amphetamine treatment for ADHD in children that found an average increase of 4.5 IQ points, continued increases in attention, and continued decreases in disruptive behaviors and hyperactivity. Another review indicated that, based upon the longest follow-up studies conducted to date, lifetime stimulant therapy that begins during childhood is continuously effective for controlling ADHD symptoms and reduces the risk of developing a substance use disorder as an adult. Current models of ADHD suggest that it is associated with functional impairments in some of the brain's neurotransmitter systems; these functional impairments involve impaired dopamine neurotransmission in the mesocorticolimbic projection and norepinephrine neurotransmission in the noradrenergic projections from the locus coeruleus to the prefrontal cortex. Psychostimulants like methylphenidate and amphetamine are effective in treating ADHD because they increase neurotransmitter activity in these systems. Approximately 80% of those who use these stimulants see improvements in ADHD symptoms. Children with ADHD who use stimulant medications generally have better relationships with peers and family members, perform better in school, are less distractible and impulsive, and have longer attention spans. The Cochrane reviews on the treatment of ADHD in children, adolescents, and adults with pharmaceutical amphetamines stated that short-term studies have demonstrated that these drugs decrease the severity of symptoms, but they have higher discontinuation rates than non-stimulant medications due to their adverse side effects. A Cochrane review on the treatment of ADHD in children with tic disorders such as Tourette syndrome indicated that stimulants in general do not make tics worse, but high doses of dextroamphetamine could exacerbate tics in some individuals. Enhancing performance Cognitive performance In 2015, a systematic review and a meta-analysis of high quality clinical trials found that, when used at low (therapeutic) doses, amphetamine produces modest yet unambiguous improvements in cognition, including working memory, long-term episodic memory, inhibitory control, and some aspects of attention, in normal healthy adults; these cognition-enhancing effects of amphetamine are known to be partially mediated through the indirect activation of both dopamine receptor D1 and adrenoceptor α2 in the prefrontal cortex. A systematic review from 2014 found that low doses of amphetamine also improve memory consolidation, in turn leading to improved recall of information. Therapeutic doses of amphetamine also enhance cortical network efficiency, an effect which mediates improvements in working memory in all individuals. Amphetamine and other ADHD stimulants also improve task saliency (motivation to perform a task) and increase arousal (wakefulness), in turn promoting goal-directed behavior. Stimulants such as amphetamine can improve performance on difficult and boring tasks and are used by some students as a study and test-taking aid. Based upon studies of self-reported illicit stimulant use, of college students use diverted ADHD stimulants, which are primarily used for enhancement of academic performance rather than as recreational drugs. However, high amphetamine doses that are above the therapeutic range can interfere with working memory and other aspects of cognitive control. Physical performance Amphetamine is used by some athletes for its psychological and athletic performance-enhancing effects, such as increased endurance and alertness; however, non-medical amphetamine use is prohibited at sporting events that are regulated by collegiate, national, and international anti-doping agencies. In healthy people at oral therapeutic doses, amphetamine has been shown to increase muscle strength, acceleration, athletic performance in anaerobic conditions, and endurance (i.e., it delays the onset of fatigue), while improving reaction time. Amphetamine improves endurance and reaction time primarily through reuptake inhibition and release of dopamine in the central nervous system. Amphetamine and other dopaminergic drugs also increase power output at fixed levels of perceived exertion by overriding a "safety switch", allowing the core temperature limit to increase in order to access a reserve capacity that is normally off-limits. At therapeutic doses, the adverse effects of amphetamine do not impede athletic performance; however, at much higher doses, amphetamine can induce effects that severely impair performance, such as rapid muscle breakdown and elevated body temperature. Recreational Amphetamine, specifically the more dopaminergic dextrorotatory enantiomer (dextroamphetamine), is also used recreationally as a euphoriant and aphrodisiac, and like other amphetamines; is used as a club drug for its energetic and euphoric high. Dextroamphetamine (d-amphetamine) is considered to have a high potential for misuse in a recreational manner since individuals typically report feeling euphoric, more alert, and more energetic after taking the drug. A notable part of the 1960s mod subculture in the UK was recreational amphetamine use, which was used to fuel all-night dances at clubs like Manchester's Twisted Wheel. Newspaper reports described dancers emerging from clubs at 5 a.m. with dilated pupils. Mods used the drug for stimulation and alertness, which they viewed as different from the intoxication caused by alcohol and other drugs. Dr. Andrew Wilson argues that for a significant minority, "amphetamines symbolised the smart, on-the-ball, cool image" and that they sought "stimulation not intoxication [...] greater awareness, not escape" and "confidence and articulacy" rather than the "drunken rowdiness of previous generations." Dextroamphetamine's dopaminergic (rewarding) properties affect the mesocorticolimbic circuit; a group of neural structures responsible for incentive salience (i.e., "wanting"; desire or craving for a reward and motivation), positive reinforcement and positively-valenced emotions, particularly ones involving pleasure. Large recreational doses of dextroamphetamine may produce symptoms of dextroamphetamine overdose. Recreational users sometimes open dexedrine capsules and crush the contents in order to insufflate (snort) it or subsequently dissolve it in water and inject it. Immediate-release formulations have higher potential for abuse via insufflation (snorting) or intravenous injection due to a more favorable pharmacokinetic profile and easy crushability (especially tablets). Injection into the bloodstream can be dangerous because insoluble fillers within the tablets can block small blood vessels. Chronic overuse of dextroamphetamine can lead to severe drug dependence, resulting in withdrawal symptoms when drug use stops. Contraindications According to the International Programme on Chemical Safety (IPCS) and the United States Food and Drug Administration (USFDA), amphetamine is contraindicated in people with a history of drug abuse, cardiovascular disease, severe agitation, or severe anxiety. It is also contraindicated in individuals with advanced arteriosclerosis (hardening of the arteries), glaucoma (increased eye pressure), hyperthyroidism (excessive production of thyroid hormone), or moderate to severe hypertension. These agencies indicate that people who have experienced allergic reactions to other stimulants or who are taking monoamine oxidase inhibitors (MAOIs) should not take amphetamine, although safe concurrent use of amphetamine and monoamine oxidase inhibitors has been documented. These agencies also state that anyone with anorexia nervosa, bipolar disorder, depression, hypertension, liver or kidney problems, mania, psychosis, Raynaud's phenomenon, seizures, thyroid problems, tics, or Tourette syndrome should monitor their symptoms while taking amphetamine. Evidence from human studies indicates that therapeutic amphetamine use does not cause developmental abnormalities in the fetus or newborns (i.e., it is not a human teratogen), but amphetamine abuse does pose risks to the fetus. Amphetamine has also been shown to pass into breast milk, so the IPCS and the USFDA advise mothers to avoid breastfeeding when using it. Due to the potential for reversible growth impairments, the USFDA advises monitoring the height and weight of children and adolescents prescribed an amphetamine pharmaceutical. Adverse effects The adverse side effects of amphetamine are many and varied, and the amount of amphetamine used is the primary factor in determining the likelihood and severity of adverse effects. Amphetamine products such as Adderall, Dexedrine, and their generic equivalents are currently approved by the USFDA for long-term therapeutic use. Recreational use of amphetamine generally involves much larger doses, which have a greater risk of serious adverse drug effects than dosages used for therapeutic purposes. Physical Cardiovascular side effects can include hypertension or hypotension from a vasovagal response, Raynaud's phenomenon (reduced blood flow to the hands and feet), and tachycardia (increased heart rate). Sexual side effects in males may include erectile dysfunction, frequent erections, or prolonged erections. Gastrointestinal side effects may include abdominal pain, constipation, diarrhea, and nausea. Other potential physical side effects include appetite loss, blurred vision, dry mouth, excessive grinding of the teeth, nosebleed, profuse sweating, rhinitis medicamentosa (drug-induced nasal congestion), reduced seizure threshold, tics (a type of movement disorder), and weight loss. Dangerous physical side effects are rare at typical pharmaceutical doses. Amphetamine stimulates the medullary respiratory centers, producing faster and deeper breaths. In a normal person at therapeutic doses, this effect is usually not noticeable, but when respiration is already compromised, it may be evident. Amphetamine also induces contraction in the urinary bladder sphincter, the muscle which controls urination, which can result in difficulty urinating. This effect can be useful in treating bed wetting and loss of bladder control. The effects of amphetamine on the gastrointestinal tract are unpredictable. If intestinal activity is high, amphetamine may reduce gastrointestinal motility (the rate at which content moves through the digestive system); however, amphetamine may increase motility when the smooth muscle of the tract is relaxed. Amphetamine also has a slight analgesic effect and can enhance the pain relieving effects of opioids. USFDA-commissioned studies from 2011 indicate that in children, young adults, and adults there is no association between serious adverse cardiovascular events (sudden death, heart attack, and stroke) and the medical use of amphetamine or other ADHD stimulants. However, amphetamine pharmaceuticals are contraindicated in individuals with cardiovascular disease. Psychological At normal therapeutic doses, the most common psychological side effects of amphetamine include increased alertness, apprehension, concentration, initiative, self-confidence and sociability, mood swings (elated mood followed by mildly depressed mood), insomnia or wakefulness, and decreased sense of fatigue. Less common side effects include anxiety, change in libido, grandiosity, irritability, repetitive or obsessive behaviors, and restlessness; these effects depend on the user's personality and current mental state. Amphetamine psychosis (e.g., delusions and paranoia) can occur in heavy users. Although very rare, this psychosis can also occur at therapeutic doses during long-term therapy. According to the USFDA, "there is no systematic evidence" that stimulants produce aggressive behavior or hostility. Amphetamine has also been shown to produce a conditioned place preference in humans taking therapeutic doses, meaning that individuals acquire a preference for spending time in places where they have previously used amphetamine. Reinforcement disorders Addiction Addiction is a serious risk with heavy recreational amphetamine use, but is unlikely to occur from long-term medical use at therapeutic doses; in fact, lifetime stimulant therapy for ADHD that begins during childhood reduces the risk of developing substance use disorders as an adult. Pathological overactivation of the mesolimbic pathway, a dopamine pathway that connects the ventral tegmental area to the nucleus accumbens, plays a central role in amphetamine addiction. Individuals who frequently self-administer high doses of amphetamine have a high risk of developing an amphetamine addiction, since chronic use at high doses gradually increases the level of accumbal ΔFosB, a "molecular switch" and "master control protein" for addiction. Once nucleus accumbens ΔFosB is sufficiently overexpressed, it begins to increase the severity of addictive behavior (i.e., compulsive drug-seeking) with further increases in its expression. While there are currently no effective drugs for treating amphetamine addiction, regularly engaging in sustained aerobic exercise appears to reduce the risk of developing such an addiction. Exercise therapy improves clinical treatment outcomes and may be used as an adjunct therapy with behavioral therapies for addiction. Biomolecular mechanisms Chronic use of amphetamine at excessive doses causes alterations in gene expression in the mesocorticolimbic projection, which arise through transcriptional and epigenetic mechanisms. The most important transcription factors that produce these alterations are Delta FBJ murine osteosarcoma viral oncogene homolog B (ΔFosB), cAMP response element binding protein (CREB), and nuclear factor-kappa B (NF-κB). ΔFosB is the most significant biomolecular mechanism in addiction because ΔFosB overexpression (i.e., an abnormally high level of gene expression which produces a pronounced gene-related phenotype) in the D1-type medium spiny neurons in the nucleus accumbens is necessary and sufficient for many of the neural adaptations and regulates multiple behavioral effects (e.g., reward sensitization and escalating drug self-administration) involved in addiction. Once ΔFosB is sufficiently overexpressed, it induces an addictive state that becomes increasingly more severe with further increases in ΔFosB expression. It has been implicated in addictions to alcohol, cannabinoids, cocaine, methylphenidate, nicotine, opioids, phencyclidine, propofol, and substituted amphetamines, among others. ΔJunD, a transcription factor, and G9a, a histone methyltransferase enzyme, both oppose the function of ΔFosB and inhibit increases in its expression. Sufficiently overexpressing ΔJunD in the nucleus accumbens with viral vectors can completely block many of the neural and behavioral alterations seen in chronic drug abuse (i.e., the alterations mediated by ΔFosB). Similarly, accumbal G9a hyperexpression results in markedly increased histone 3 lysine residue 9 dimethylation (H3K9me2) and blocks the induction of ΔFosB-mediated neural and behavioral plasticity by chronic drug use, which occurs via H3K9me2-mediated repression of transcription factors for ΔFosB and H3K9me2-mediated repression of various ΔFosB transcriptional targets (e.g., CDK5). ΔFosB also plays an important role in regulating behavioral responses to natural rewards, such as palatable food, sex, and exercise. Since both natural rewards and addictive drugs induce the expression of ΔFosB (i.e., they cause the brain to produce more of it), chronic acquisition of these rewards can result in a similar pathological state of addiction. Consequently, ΔFosB is the most significant factor involved in both amphetamine addiction and amphetamine-induced sexual addictions, which are compulsive sexual behaviors that result from excessive sexual activity and amphetamine use. These sexual addictions are associated with a dopamine dysregulation syndrome which occurs in some patients taking dopaminergic drugs. The effects of amphetamine on gene regulation are both dose- and route-dependent. Most of the research on gene regulation and addiction is based upon animal studies with intravenous amphetamine administration at very high doses. The few studies that have used equivalent (weight-adjusted) human therapeutic doses and oral administration show that these changes, if they occur, are relatively minor. This suggests that medical use of amphetamine does not significantly affect gene regulation. Pharmacological treatments there is no effective pharmacotherapy for amphetamine addiction. Reviews from 2015 and 2016 indicated that TAAR1-selective agonists have significant therapeutic potential as a treatment for psychostimulant addictions; however, the only compounds which are known to function as TAAR1-selective agonists are experimental drugs. Amphetamine addiction is largely mediated through increased activation of dopamine receptors and NMDA receptors in the nucleus accumbens; magnesium ions inhibit NMDA receptors by blocking the receptor calcium channel. One review suggested that, based upon animal testing, pathological (addiction-inducing) psychostimulant use significantly reduces the level of intracellular magnesium throughout the brain. Supplemental magnesium treatment has been shown to reduce amphetamine self-administration (i.e., doses given to oneself) in humans, but it is not an effective monotherapy for amphetamine addiction. A systematic review and meta-analysis from 2019 assessed the efficacy of 17 different pharmacotherapies used in randomized controlled trials (RCTs) for amphetamine and methamphetamine addiction; it found only low-strength evidence that methylphenidate might reduce amphetamine or methamphetamine self-administration. There was low- to moderate-strength evidence of no benefit for most of the other medications used in RCTs, which included antidepressants (bupropion, mirtazapine, sertraline), antipsychotics (aripiprazole), anticonvulsants (topiramate, baclofen, gabapentin), naltrexone, varenicline, citicoline, ondansetron, prometa, riluzole, atomoxetine, dextroamphetamine, and modafinil. Behavioral treatments A 2018 systematic review and network meta-analysis of 50 trials involving 12 different psychosocial interventions for amphetamine, methamphetamine, or cocaine addiction found that combination therapy with both contingency management and community reinforcement approach had the highest efficacy (i.e., abstinence rate) and acceptability (i.e., lowest dropout rate). Other treatment modalities examined in the analysis included monotherapy with contingency management or community reinforcement approach, cognitive behavioral therapy, 12-step programs, non-contingent reward-based therapies, psychodynamic therapy, and other combination therapies involving these. Additionally, research on the neurobiological effects of physical exercise suggests that daily aerobic exercise, especially endurance exercise (e.g., marathon running), prevents the development of drug addiction and is an effective adjunct therapy (i.e., a supplemental treatment) for amphetamine addiction. Exercise leads to better treatment outcomes when used as an adjunct treatment, particularly for psychostimulant addictions. In particular, aerobic exercise decreases psychostimulant self-administration, reduces the reinstatement (i.e., relapse) of drug-seeking, and induces increased dopamine receptor D2 (DRD2) density in the striatum. This is the opposite of pathological stimulant use, which induces decreased striatal DRD2 density. One review noted that exercise may also prevent the development of a drug addiction by altering ΔFosB or immunoreactivity in the striatum or other parts of the reward system. Dependence and withdrawal Drug tolerance develops rapidly in amphetamine abuse (i.e., recreational amphetamine use), so periods of extended abuse require increasingly larger doses of the drug in order to achieve the same effect. According to a Cochrane review on withdrawal in individuals who compulsively use amphetamine and methamphetamine, "when chronic heavy users abruptly discontinue amphetamine use, many report a time-limited withdrawal syndrome that occurs within 24 hours of their last dose." This review noted that withdrawal symptoms in chronic, high-dose users are frequent, occurring in roughly 88% of cases, and persist for  weeks with a marked "crash" phase occurring during the first week. Amphetamine withdrawal symptoms can include anxiety, drug craving, depressed mood, fatigue, increased appetite, increased movement or decreased movement, lack of motivation, sleeplessness or sleepiness, and lucid dreams. The review indicated that the severity of withdrawal symptoms is positively correlated with the age of the individual and the extent of their dependence. Mild withdrawal symptoms from the discontinuation of amphetamine treatment at therapeutic doses can be avoided by tapering the dose. Overdose An amphetamine overdose can lead to many different symptoms, but is rarely fatal with appropriate care. The severity of overdose symptoms increases with dosage and decreases with drug tolerance to amphetamine. Tolerant individuals have been known to take as much as 5 grams of amphetamine in a day, which is roughly 100 times the maximum daily therapeutic dose. Symptoms of a moderate and extremely large overdose are listed below; fatal amphetamine poisoning usually also involves convulsions and coma. In 2013, overdose on amphetamine, methamphetamine, and other compounds implicated in an "amphetamine use disorder" resulted in an estimated 3,788 deaths worldwide ( deaths, 95% confidence). Toxicity In rodents and primates, sufficiently high doses of amphetamine cause dopaminergic neurotoxicity, or damage to dopamine neurons, which is characterized by dopamine terminal degeneration and reduced transporter and receptor function. There is no evidence that amphetamine is directly neurotoxic in humans. However, large doses of amphetamine may indirectly cause dopaminergic neurotoxicity as a result of hyperpyrexia, the excessive formation of reactive oxygen species, and increased autoxidation of dopamine. Animal models of neurotoxicity from high-dose amphetamine exposure indicate that the occurrence of hyperpyrexia (i.e., core body temperature ≥ 40 °C) is necessary for the development of amphetamine-induced neurotoxicity. Prolonged elevations of brain temperature above 40 °C likely promote the development of amphetamine-induced neurotoxicity in laboratory animals by facilitating the production of reactive oxygen species, disrupting cellular protein function, and transiently increasing blood–brain barrier permeability. Psychosis An amphetamine overdose can result in a stimulant psychosis that may involve a variety of symptoms, such as delusions and paranoia. A Cochrane review on treatment for amphetamine, dextroamphetamine, and methamphetamine psychosis states that about of users fail to recover completely. According to the same review, there is at least one trial that shows antipsychotic medications effectively resolve the symptoms of acute amphetamine psychosis. Psychosis rarely arises from therapeutic use. Drug interactions Many types of substances are known to interact with amphetamine, resulting in altered drug action or metabolism of amphetamine, the interacting substance, or both. Inhibitors of enzymes that metabolize amphetamine (e.g., CYP2D6 and FMO3) will prolong its elimination half-life, meaning that its effects will last longer. Amphetamine also interacts with , particularly monoamine oxidase A inhibitors, since both MAOIs and amphetamine increase plasma catecholamines (i.e., norepinephrine and dopamine); therefore, concurrent use of both is dangerous. Amphetamine modulates the activity of most psychoactive drugs. In particular, amphetamine may decrease the effects of sedatives and depressants and increase the effects of stimulants and antidepressants. Amphetamine may also decrease the effects of antihypertensives and antipsychotics due to its effects on blood pressure and dopamine respectively. Zinc supplementation may reduce the minimum effective dose of amphetamine when it is used for the treatment of ADHD. In general, there is no significant interaction when consuming amphetamine with food, but the pH of gastrointestinal content and urine affects the absorption and excretion of amphetamine, respectively. Acidic substances reduce the absorption of amphetamine and increase urinary excretion, and alkaline substances do the opposite. Due to the effect pH has on absorption, amphetamine also interacts with gastric acid reducers such as proton pump inhibitors and H2 antihistamines, which increase gastrointestinal pH (i.e., make it less acidic). Pharmacology Pharmacodynamics Amphetamine exerts its behavioral effects by altering the use of monoamines as neuronal signals in the brain, primarily in catecholamine neurons in the reward and executive function pathways of the brain. The concentrations of the main neurotransmitters involved in reward circuitry and executive functioning, dopamine and norepinephrine, increase dramatically in a dose-dependent manner by amphetamine because of its effects on monoamine transporters. The reinforcing and motivational salience-promoting effects of amphetamine are due mostly to enhanced dopaminergic activity in the mesolimbic pathway. The euphoric and locomotor-stimulating effects of amphetamine are dependent upon the magnitude and speed by which it increases synaptic dopamine and norepinephrine concentrations in the striatum. Amphetamine has been identified as a potent full agonist of trace amine-associated receptor 1 (TAAR1), a and G protein-coupled receptor (GPCR) discovered in 2001, which is important for regulation of brain monoamines. Activation of increases production via adenylyl cyclase activation and inhibits monoamine transporter function. Monoamine autoreceptors (e.g., D2 short, presynaptic α2, and presynaptic 5-HT1A) have the opposite effect of TAAR1, and together these receptors provide a regulatory system for monoamines. Notably, amphetamine and trace amines possess high binding affinities for TAAR1, but not for monoamine autoreceptors. Imaging studies indicate that monoamine reuptake inhibition by amphetamine and trace amines is site specific and depends upon the presence of TAAR1 in the associated monoamine neurons. In addition to the neuronal monoamine transporters, amphetamine also inhibits both vesicular monoamine transporters, VMAT1 and VMAT2, as well as SLC1A1, SLC22A3, and SLC22A5. SLC1A1 is excitatory amino acid transporter 3 (EAAT3), a glutamate transporter located in neurons, SLC22A3 is an extraneuronal monoamine transporter that is present in astrocytes, and SLC22A5 is a high-affinity carnitine transporter. Amphetamine is known to strongly induce cocaine- and amphetamine-regulated transcript (CART) gene expression, a neuropeptide involved in feeding behavior, stress, and reward, which induces observable increases in neuronal development and survival in vitro. The CART receptor has yet to be identified, but there is significant evidence that CART binds to a unique . Amphetamine also inhibits monoamine oxidases at very high doses, resulting in less monoamine and trace amine metabolism and consequently higher concentrations of synaptic monoamines. In humans, the only post-synaptic receptor at which amphetamine is known to bind is the receptor, where it acts as an agonist with low micromolar affinity. The full profile of amphetamine's short-term drug effects in humans is mostly derived through increased cellular communication or neurotransmission of dopamine, serotonin, norepinephrine, epinephrine, histamine, CART peptides, endogenous opioids, adrenocorticotropic hormone, corticosteroids, and glutamate, which it affects through interactions with , , , , , , and possibly other biological targets. Amphetamine also activates seven human carbonic anhydrase enzymes, several of which are expressed in the human brain. Dextroamphetamine is a more potent agonist of than levoamphetamine. Consequently, dextroamphetamine produces greater stimulation than levoamphetamine, roughly three to four times more, but levoamphetamine has slightly stronger cardiovascular and peripheral effects. Dopamine In certain brain regions, amphetamine increases the concentration of dopamine in the synaptic cleft. Amphetamine can enter the presynaptic neuron either through or by diffusing across the neuronal membrane directly. As a consequence of DAT uptake, amphetamine produces competitive reuptake inhibition at the transporter. Upon entering the presynaptic neuron, amphetamine activates which, through protein kinase A (PKA) and protein kinase C (PKC) signaling, causes DAT phosphorylation. Phosphorylation by either protein kinase can result in DAT internalization ( reuptake inhibition), but phosphorylation alone induces the reversal of dopamine transport through DAT (i.e., dopamine efflux). Amphetamine is also known to increase intracellular calcium, an effect which is associated with DAT phosphorylation through an unidentified Ca2+/calmodulin-dependent protein kinase (CAMK)-dependent pathway, in turn producing dopamine efflux. Through direct activation of G protein-coupled inwardly-rectifying potassium channels, reduces the firing rate of dopamine neurons, preventing a hyper-dopaminergic state. Amphetamine is also a substrate for the presynaptic vesicular monoamine transporter, . Following amphetamine uptake at VMAT2, amphetamine induces the collapse of the vesicular pH gradient, which results in the release of dopamine molecules from synaptic vesicles into the cytosol via dopamine efflux through VMAT2. Subsequently, the cytosolic dopamine molecules are released from the presynaptic neuron into the synaptic cleft via reverse transport at . Norepinephrine Similar to dopamine, amphetamine dose-dependently increases the level of synaptic norepinephrine, the direct precursor of epinephrine. Based upon neuronal expression, amphetamine is thought to affect norepinephrine analogously to dopamine. In other words, amphetamine induces TAAR1-mediated efflux and reuptake inhibition at phosphorylated , competitive NET reuptake inhibition, and norepinephrine release from . Serotonin Amphetamine exerts analogous, yet less pronounced, effects on serotonin as on dopamine and norepinephrine. Amphetamine affects serotonin via and, like norepinephrine, is thought to phosphorylate via . Like dopamine, amphetamine has low, micromolar affinity at the human 5-HT1A receptor. Other neurotransmitters, peptides, hormones, and enzymes Acute amphetamine administration in humans increases endogenous opioid release in several brain structures in the reward system. Extracellular levels of glutamate, the primary excitatory neurotransmitter in the brain, have been shown to increase in the striatum following exposure to amphetamine. This increase in extracellular glutamate presumably occurs via the amphetamine-induced internalization of EAAT3, a glutamate reuptake transporter, in dopamine neurons. Amphetamine also induces the selective release of histamine from mast cells and efflux from histaminergic neurons through . Acute amphetamine administration can also increase adrenocorticotropic hormone and corticosteroid levels in blood plasma by stimulating the hypothalamic–pituitary–adrenal axis. In December 2017, the first study assessing the interaction between amphetamine and human carbonic anhydrase enzymes was published; of the eleven carbonic anhydrase enzymes it examined, it found that amphetamine potently activates seven, four of which are highly expressed in the human brain, with low nanomolar through low micromolar activating effects. Based upon preclinical research, cerebral carbonic anhydrase activation has cognition-enhancing effects; but, based upon the clinical use of carbonic anhydrase inhibitors, carbonic anhydrase activation in other tissues may be associated with adverse effects, such as ocular activation exacerbating glaucoma. Pharmacokinetics The oral bioavailability of amphetamine varies with gastrointestinal pH; it is well absorbed from the gut, and bioavailability is typically over 75% for dextroamphetamine. Amphetamine is a weak base with a pKa of 9.9; consequently, when the pH is basic, more of the drug is in its lipid soluble free base form, and more is absorbed through the lipid-rich cell membranes of the gut epithelium. Conversely, an acidic pH means the drug is predominantly in a water-soluble cationic (salt) form, and less is absorbed. Approximately of amphetamine circulating in the bloodstream is bound to plasma proteins. Following absorption, amphetamine readily distributes into most tissues in the body, with high concentrations occurring in cerebrospinal fluid and brain tissue. The half-lives of amphetamine enantiomers differ and vary with urine pH. At normal urine pH, the half-lives of dextroamphetamine and levoamphetamine are  hours and  hours, respectively. Highly acidic urine will reduce the enantiomer half-lives to 7 hours; highly alkaline urine will increase the half-lives up to 34 hours. The immediate-release and extended release variants of salts of both isomers reach peak plasma concentrations at 3 hours and 7 hours post-dose respectively. Amphetamine is eliminated via the kidneys, with of the drug being excreted unchanged at normal urinary pH. When the urinary pH is basic, amphetamine is in its free base form, so less is excreted. When urine pH is abnormal, the urinary recovery of amphetamine may range from a low of 1% to a high of 75%, depending mostly upon whether urine is too basic or acidic, respectively. Following oral administration, amphetamine appears in urine within 3 hours. Roughly 90% of ingested amphetamine is eliminated 3 days after the last oral dose. CYP2D6, dopamine β-hydroxylase (DBH), flavin-containing monooxygenase 3 (FMO3), butyrate-CoA ligase (XM-ligase), and glycine N-acyltransferase (GLYAT) are the enzymes known to metabolize amphetamine or its metabolites in humans. Amphetamine has a variety of excreted metabolic products, including , , , benzoic acid, hippuric acid, norephedrine, and phenylacetone. Among these metabolites, the active sympathomimetics are , , and norephedrine. The main metabolic pathways involve aromatic para-hydroxylation, aliphatic alpha- and beta-hydroxylation, N-oxidation, N-dealkylation, and deamination. The known metabolic pathways, detectable metabolites, and metabolizing enzymes in humans include the following: Pharmacomicrobiomics The human metagenome (i.e., the genetic composition of an individual and all microorganisms that reside on or within the individual's body) varies considerably between individuals. Since the total number of microbial and viral cells in the human body (over 100 trillion) greatly outnumbers human cells (tens of trillions), there is considerable potential for interactions between drugs and an individual's microbiome, including: drugs altering the composition of the human microbiome, drug metabolism by microbial enzymes modifying the drug's pharmacokinetic profile, and microbial drug metabolism affecting a drug's clinical efficacy and toxicity profile. The field that studies these interactions is known as pharmacomicrobiomics. Similar to most biomolecules and other orally administered xenobiotics (i.e., drugs), amphetamine is predicted to undergo promiscuous metabolism by human gastrointestinal microbiota (primarily bacteria) prior to absorption into the blood stream. The first amphetamine-metabolizing microbial enzyme, tyramine oxidase from a strain of E. coli commonly found in the human gut, was identified in 2019. This enzyme was found to metabolize amphetamine, tyramine, and phenethylamine with roughly the same binding affinity for all three compounds. Related endogenous compounds Amphetamine has a very similar structure and function to the endogenous trace amines, which are naturally occurring neuromodulator molecules produced in the human body and brain. Among this group, the most closely related compounds are phenethylamine, the parent compound of amphetamine, and , an isomer of amphetamine (i.e., it has an identical molecular formula). In humans, phenethylamine is produced directly from by the aromatic amino acid decarboxylase (AADC) enzyme, which converts into dopamine as well. In turn, is metabolized from phenethylamine by phenylethanolamine N-methyltransferase, the same enzyme that metabolizes norepinephrine into epinephrine. Like amphetamine, both phenethylamine and regulate monoamine neurotransmission via ; unlike amphetamine, both of these substances are broken down by monoamine oxidase B, and therefore have a shorter half-life than amphetamine. Chemistry Amphetamine is a methyl homolog of the mammalian neurotransmitter phenethylamine with the chemical formula . The carbon atom adjacent to the primary amine is a stereogenic center, and amphetamine is composed of a racemic 1:1 mixture of two enantiomers. This racemic mixture can be separated into its optical isomers: levoamphetamine and dextroamphetamine. At room temperature, the pure free base of amphetamine is a mobile, colorless, and volatile liquid with a characteristically strong amine odor, and acrid, burning taste. Frequently prepared solid salts of amphetamine include amphetamine adipate, aspartate, hydrochloride, phosphate, saccharate, sulfate, and tannate. Dextroamphetamine sulfate is the most common enantiopure salt. Amphetamine is also the parent compound of its own structural class, which includes a number of psychoactive derivatives. In organic chemistry, amphetamine is an excellent chiral ligand for the stereoselective synthesis of . Substituted derivatives The substituted derivatives of amphetamine, or "substituted amphetamines", are a broad range of chemicals that contain amphetamine as a "backbone"; specifically, this chemical class includes derivative compounds that are formed by replacing one or more hydrogen atoms in the amphetamine core structure with substituents. The class includes amphetamine itself, stimulants like methamphetamine, serotonergic empathogens like MDMA, and decongestants like ephedrine, among other subgroups. Synthesis Since the first preparation was reported in 1887, numerous synthetic routes to amphetamine have been developed. The most common route of both legal and illicit amphetamine synthesis employs a non-metal reduction known as the Leuckart reaction (method 1). In the first step, a reaction between phenylacetone and formamide, either using additional formic acid or formamide itself as a reducing agent, yields . This intermediate is then hydrolyzed using hydrochloric acid, and subsequently basified, extracted with organic solvent, concentrated, and distilled to yield the free base. The free base is then dissolved in an organic solvent, sulfuric acid added, and amphetamine precipitates out as the sulfate salt. A number of chiral resolutions have been developed to separate the two enantiomers of amphetamine. For example, racemic amphetamine can be treated with to form a diastereoisomeric salt which is fractionally crystallized to yield dextroamphetamine. Chiral resolution remains the most economical method for obtaining optically pure amphetamine on a large scale. In addition, several enantioselective syntheses of amphetamine have been developed. In one example, optically pure is condensed with phenylacetone to yield a chiral Schiff base. In the key step, this intermediate is reduced by catalytic hydrogenation with a transfer of chirality to the carbon atom alpha to the amino group. Cleavage of the benzylic amine bond by hydrogenation yields optically pure dextroamphetamine. A large number of alternative synthetic routes to amphetamine have been developed based on classic organic reactions. One example is the Friedel–Crafts alkylation of benzene by allyl chloride to yield beta chloropropylbenzene which is then reacted with ammonia to produce racemic amphetamine (method 2). Another example employs the Ritter reaction (method 3). In this route, allylbenzene is reacted acetonitrile in sulfuric acid to yield an organosulfate which in turn is treated with sodium hydroxide to give amphetamine via an acetamide intermediate. A third route starts with which through a double alkylation with methyl iodide followed by benzyl chloride can be converted into acid. This synthetic intermediate can be transformed into amphetamine using either a Hofmann or Curtius rearrangement (method 4). A significant number of amphetamine syntheses feature a reduction of a nitro, imine, oxime, or other nitrogen-containing functional groups. In one such example, a Knoevenagel condensation of benzaldehyde with nitroethane yields . The double bond and nitro group of this intermediate is reduced using either catalytic hydrogenation or by treatment with lithium aluminium hydride (method 5). Another method is the reaction of phenylacetone with ammonia, producing an imine intermediate that is reduced to the primary amine using hydrogen over a palladium catalyst or lithium aluminum hydride (method 6). Detection in body fluids Amphetamine is frequently measured in urine or blood as part of a drug test for sports, employment, poisoning diagnostics, and forensics. Techniques such as immunoassay, which is the most common form of amphetamine test, may cross-react with a number of sympathomimetic drugs. Chromatographic methods specific for amphetamine are employed to prevent false positive results. Chiral separation techniques may be employed to help distinguish the source of the drug, whether prescription amphetamine, prescription amphetamine prodrugs, (e.g., selegiline), over-the-counter drug products that contain levomethamphetamine, or illicitly obtained substituted amphetamines. Several prescription drugs produce amphetamine as a metabolite, including benzphetamine, clobenzorex, famprofazone, fenproporex, lisdexamfetamine, mesocarb, methamphetamine, prenylamine, and selegiline, among others. These compounds may produce positive results for amphetamine on drug tests. Amphetamine is generally only detectable by a standard drug test for approximately 24 hours, although a high dose may be detectable for  days. For the assays, a study noted that an enzyme multiplied immunoassay technique (EMIT) assay for amphetamine and methamphetamine may produce more false positives than liquid chromatography–tandem mass spectrometry. Gas chromatography–mass spectrometry (GC–MS) of amphetamine and methamphetamine with the derivatizing agent chloride allows for the detection of methamphetamine in urine. GC–MS of amphetamine and methamphetamine with the chiral derivatizing agent Mosher's acid chloride allows for the detection of both dextroamphetamine and dextromethamphetamine in urine. Hence, the latter method may be used on samples that test positive using other methods to help distinguish between the various sources of the drug. History, society, and culture Amphetamine was first synthesized in 1887 in Germany by Romanian chemist Lazăr Edeleanu who named it phenylisopropylamine; its stimulant effects remained unknown until 1927, when it was independently resynthesized by Gordon Alles and reported to have sympathomimetic properties. Amphetamine had no medical use until late 1933, when Smith, Kline and French began selling it as an inhaler under the brand name Benzedrine as a decongestant. Benzedrine sulfate was introduced 3 years later and was used to treat a wide variety of medical conditions, including narcolepsy, obesity, low blood pressure, low libido, and chronic pain, among others. During World War II, amphetamine and methamphetamine were used extensively by both the Allied and Axis forces for their stimulant and performance-enhancing effects. As the addictive properties of the drug became known, governments began to place strict controls on the sale of amphetamine. For example, during the early 1970s in the United States, amphetamine became a schedule II controlled substance under the Controlled Substances Act. In spite of strict government controls, amphetamine has been used legally or illicitly by people from a variety of backgrounds, including authors, musicians, mathematicians, and athletes. Amphetamine is still illegally synthesized today in clandestine labs and sold on the black market, primarily in European countries. Among European Union (EU) member states 11.9 million adults of ages have used amphetamine or methamphetamine at least once in their lives and 1.7 million have used either in the last year. During 2012, approximately 5.9 metric tons of illicit amphetamine were seized within EU member states; the "street price" of illicit amphetamine within the EU ranged from  per gram during the same period. Outside Europe, the illicit market for amphetamine is much smaller than the market for methamphetamine and MDMA. Legal status As a result of the United Nations 1971 Convention on Psychotropic Substances, amphetamine became a schedule II controlled substance, as defined in the treaty, in all 183 state parties. Consequently, it is heavily regulated in most countries. Some countries, such as South Korea and Japan, have banned substituted amphetamines even for medical use. In other nations, such as Canada (schedule I drug), the Netherlands (List I drug), the United States (schedule II drug), Australia (schedule 8), Thailand (category 1 narcotic), and United Kingdom (class B drug), amphetamine is in a restrictive national drug schedule that allows for its use as a medical treatment. Pharmaceutical products Several currently marketed amphetamine formulations contain both enantiomers, including those marketed under the brand names Adderall, Adderall XR, Mydayis, Adzenys ER, , Dyanavel XR, Evekeo, and Evekeo ODT. Of those, Evekeo (including Evekeo ODT) is the only product containing only racemic amphetamine (as amphetamine sulfate), and is therefore the only one whose active moiety can be accurately referred to simply as "amphetamine". Dextroamphetamine, marketed under the brand names Dexedrine and Zenzedi, is the only enantiopure amphetamine product currently available. A prodrug form of dextroamphetamine, lisdexamfetamine, is also available and is marketed under the brand name Vyvanse. As it is a prodrug, lisdexamfetamine is structurally different from dextroamphetamine, and is inactive until it metabolizes into dextroamphetamine. The free base of racemic amphetamine was previously available as Benzedrine, Psychedrine, and Sympatedrine. Levoamphetamine was previously available as Cydril. Many current amphetamine pharmaceuticals are salts due to the comparatively high volatility of the free base. However, oral suspension and orally disintegrating tablet (ODT) dosage forms composed of the free base were introduced in 2015 and 2016, respectively. Some of the current brands and their generic equivalents are listed below. Notes Image legend Reference notes References External links  – Dextroamphetamine  – Levoamphetamine Comparative Toxicogenomics Database entry: Amphetamine Comparative Toxicogenomics Database entry: CARTPT 5-HT1A agonists Anorectics Aphrodisiacs Carbonic anhydrase activators Drugs acting on the cardiovascular system Drugs acting on the nervous system Drugs in sport Ergogenic aids Euphoriants Excitatory amino acid reuptake inhibitors German inventions Management of obesity Narcolepsy Nootropics Norepinephrine-dopamine releasing agents Phenethylamines Stimulants Substituted amphetamines TAAR1 agonists Attention deficit hyperactivity disorder management VMAT inhibitors World Anti-Doping Agency prohibited substances
2506
https://en.wikipedia.org/wiki/Asynchronous%20communication
Asynchronous communication
In telecommunications, asynchronous communication is transmission of data, generally without the use of an external clock signal, where data can be transmitted intermittently rather than in a steady stream. Any timing required to recover data from the communication symbols is encoded within the symbols. The most significant aspect of asynchronous communications is that data is not transmitted at regular intervals, thus making possible variable bit rate, and that the transmitter and receiver clock generators do not have to be exactly synchronized all the time. In asynchronous transmission, data is sent one byte at a time and each byte is preceded by start and stop bits. Physical layer In asynchronous serial communication in the physical protocol layer, the data blocks are code words of a certain word length, for example octets (bytes) or ASCII characters, delimited by start bits and stop bits. A variable length space can be inserted between the code words. No bit synchronization signal is required. This is sometimes called character oriented communication. Examples include MNP2 and modems older than V.2. Data link layer and higher Asynchronous communication at the data link layer or higher protocol layers is known as statistical multiplexing, for example Asynchronous Transfer Mode (ATM). In this case, the asynchronously transferred blocks are called data packets, for example ATM cells. The opposite is circuit switched communication, which provides constant bit rate, for example ISDN and SONET/SDH. The packets may be encapsulated in a data frame, with a frame synchronization bit sequence indicating the start of the frame, and sometimes also a bit synchronization bit sequence, typically 01010101, for identification of the bit transition times. Note that at the physical layer, this is considered as synchronous serial communication. Examples of packet mode data link protocols that can be/are transferred using synchronous serial communication are the HDLC, Ethernet, PPP and USB protocols. Application layer An asynchronous communication service or application does not require a constant bit rate. Examples are file transfer, email and the World Wide Web. An example of the opposite, a synchronous communication service, is realtime streaming media, for example IP telephony, IPTV and video conferencing. Electronically mediated communication Electronically mediated communication often happens asynchronously in that the participants do not communicate concurrently. Examples include email and bulletin-board systems, where participants send or post messages at different times than they read them. The term "asynchronous communication" acquired currency in the field of online learning, where teachers and students often exchange information asynchronously instead of synchronously (that is, simultaneously), as they would in face-to-face or in telephone conversations. See also Synchronization in telecommunications Asynchronous serial communication Asynchronous system Asynchronous circuit Anisochronous Baud rate Plesiochronous Plesiochronous Digital Hierarchy (PDH) References Synchronization Telecommunications techniques
2508
https://en.wikipedia.org/wiki/Artillery
Artillery
Artillery is a class of heavy military ranged weapons that launch munitions far beyond the range and power of infantry firearms. Early artillery development focused on the ability to breach defensive walls and fortifications during sieges, and led to heavy, fairly immobile siege engines. As technology improved, lighter, more mobile field artillery cannons developed for battlefield use. This development continues today; modern self-propelled artillery vehicles are highly mobile weapons of great versatility generally providing the largest share of an army's total firepower. Originally, the word "artillery" referred to any group of soldiers primarily armed with some form of manufactured weapon or armour. Since the introduction of gunpowder and cannon, "artillery" has largely meant cannon, and in contemporary usage, usually refers to shell-firing guns, howitzers, and mortars (collectively called barrel artillery, cannon artillery or gun artillery) and rocket artillery. In common speech, the word "artillery" is often used to refer to individual devices, along with their accessories and fittings, although these assemblages are more properly called "equipment". However, there is no generally recognized generic term for a gun, howitzer, mortar, and so forth: the United States uses "artillery piece", but most English-speaking armies use "gun" and "mortar". The projectiles fired are typically either "shot" (if solid) or "shell" (if not solid). Historically, variants of solid shot including canister, chain shot and grapeshot were also used. "Shell" is a widely used generic term for a projectile, which is a component of munitions. By association, artillery may also refer to the arm of service that customarily operates such engines. In some armies, the artillery arm has operated field, coastal, anti-aircraft, and anti-tank artillery; in others these have been separate arms, and with some nations coastal has been a naval or marine responsibility. In the 20th century, target acquisition devices (such as radar) and techniques (such as sound ranging and flash spotting) emerged, primarily for artillery. These are usually utilized by one or more of the artillery arms. The widespread adoption of indirect fire in the early 20th century introduced the need for specialist data for field artillery, notably survey and meteorological, and in some armies, provision of these are the responsibility of the artillery arm. The majority of combat deaths in the Napoleonic Wars, World War I, and World War II were caused by artillery. In 1944, Joseph Stalin said in a speech that artillery was "the god of war". Artillery piece Although not called by that name, siege engines performing the role recognizable as artillery have been employed in warfare since antiquity. The first known catapult was developed in Syracuse in 399 BC. Until the introduction of gunpowder into western warfare, artillery was dependent upon mechanical energy which not only severely limited the kinetic energy of the projectiles, it also required the construction of very large engines to accumulate sufficient energy. A 1st-century BC Roman catapult launching stones achieved a kinetic energy of 16 kilojoules, compared to a mid-19th-century 12-pounder gun, which fired a round, with a kinetic energy of 240 kilojoules, or a 20th-century US battleship that fired a projectile from its main battery with an energy level surpassing 350 megajoules. From the Middle Ages through most of the modern era, artillery pieces on land were moved by horse-drawn gun carriages. In the contemporary era, artillery pieces and their crew relied on wheeled or tracked vehicles as transportation. These land versions of artillery were dwarfed by railway guns; the largest of these large-calibre guns ever conceived – Project Babylon of the Supergun affair – was theoretically capable of putting a satellite into orbit. Artillery used by naval forces has also changed significantly, with missiles generally replacing guns in surface warfare. Over the course of military history, projectiles were manufactured from a wide variety of materials, into a wide variety of shapes, using many different methods in which to target structural/defensive works and inflict enemy casualties. The engineering applications for ordnance delivery have likewise changed significantly over time, encompassing some of the most complex and advanced technologies in use today. In some armies, the weapon of artillery is the projectile, not the equipment that fires it. The process of delivering fire onto the target is called gunnery. The actions involved in operating an artillery piece are collectively called "serving the gun" by the "detachment" or gun crew, constituting either direct or indirect artillery fire. The manner in which gunnery crews (or formations) are employed is called artillery support. At different periods in history, this may refer to weapons designed to be fired from ground-, sea-, and even air-based weapons platforms. Crew Some armed forces use the term "gunners" for the soldiers and sailors with the primary function of using artillery. The gunners and their guns are usually grouped in teams called either "crews" or "detachments". Several such crews and teams with other functions are combined into a unit of artillery, usually called a battery, although sometimes called a company. In gun detachments, each role is numbered, starting with "1" the Detachment Commander, and the highest number being the Coverer, the second-in-command. "Gunner" is also the lowest rank, and junior non-commissioned officers are "Bombardiers" in some artillery arms. Batteries are roughly equivalent to a company in the infantry, and are combined into larger military organizations for administrative and operational purposes, either battalions or regiments, depending on the army. These may be grouped into brigades; the Russian army also groups some brigades into artillery divisions, and the People's Liberation Army has artillery corps. The term "artillery" also designates a combat arm of most military services when used organizationally to describe units and formations of the national armed forces that operate the weapons. Tactics During military operations, field artillery has the role of providing support to other arms in combat or of attacking targets, particularly in-depth. Broadly, these effects fall into two categories, aiming either to suppress or neutralize the enemy, or to cause casualties, damage, and destruction. This is mostly achieved by delivering high-explosive munitions to suppress, or inflict casualties on the enemy from casing fragments and other debris and from blast, or by destroying enemy positions, equipment, and vehicles. Non-lethal munitions, notably smoke, can also suppress or neutralize the enemy by obscuring their view. Fire may be directed by an artillery observer or another observer, including crewed and uncrewed aircraft, or called onto map coordinates. Military doctrine has had a significant influence on the core engineering design considerations of artillery ordnance through its history, in seeking to achieve a balance between the delivered volume of fire with ordnance mobility. However, during the modern period, the consideration of protecting the gunners also arose due to the late-19th-century introduction of the new generation of infantry weapons using conoidal bullet, better known as the Minié ball, with a range almost as long as that of field artillery. The gunners' increasing proximity to and participation in direct combat against other combat arms and attacks by aircraft made the introduction of a gun shield necessary. The problems of how to employ a fixed or horse-towed gun in mobile warfare necessitated the development of new methods of transporting the artillery into combat. Two distinct forms of artillery were developed: the towed gun, used primarily to attack or defend a fixed-line; and the self-propelled gun, intended to accompany a mobile force and to provide continuous fire support and/or suppression. These influences have guided the development of artillery ordnance, systems, organizations, and operations until the present, with artillery systems capable of providing support at ranges from as little as 100 m to the intercontinental ranges of ballistic missiles. The only combat in which artillery is unable to take part is close-quarters combat, with the possible exception of artillery reconnaissance teams. Etymology The word as used in the current context originated in the Middle Ages. One suggestion is that it comes from French atelier, meaning the place where manual work is done. Another suggestion is that it originates from the 13th century and the Old French artillier, designating craftsmen and manufacturers of all materials and warfare equipments (spears, swords, armor, war machines); and, for the next 250 years, the sense of the word "artillery" covered all forms of military weapons. Hence, the naming of the Honourable Artillery Company, which was essentially an infantry unit until the 19th century. Another suggestion is that it comes from the Italian arte de tirare (art of shooting), coined by one of the first theorists on the use of artillery, Niccolò Tartaglia. History Mechanical systems used for throwing ammunition in ancient warfare, also known as "engines of war", like the catapult, onager, trebuchet, and ballista, are also referred to by military historians as artillery. Medieval During medieval times, more types of artillery were developed, most notably the trebuchet. Traction trebuchets, using manpower to launch projectiles, have been used in ancient China since the 4th century as anti-personnel weapons. However, in the 12th century, the counterweight trebuchet was introduced, with the earliest mention of it being in 1187. Invention of gunpowder Early Chinese artillery had vase-like shapes. This includes the "long range awe inspiring" cannon dated from 1350 and found in the 14th century Ming Dynasty treatise Huolongjing. With the development of better metallurgy techniques, later cannons abandoned the vase shape of early Chinese artillery. This change can be seen in the bronze "thousand ball thunder cannon", an early example of field artillery. These small, crude weapons diffused into the Middle East (the madfaa) and reached Europe in the 13th century, in a very limited manner. In Asia, Mongols adopted the Chinese artillery and used it effectively in the great conquest. By the late 14th century, Chinese rebels used organized artillery and cavalry to push Mongols out. As small smooth-bore barrels, these were initially cast in iron or bronze around a core, with the first drilled bore ordnance recorded in operation near Seville in 1247. They fired lead, iron, or stone balls, sometimes large arrows and on occasions simply handfuls of whatever scrap came to hand. During the Hundred Years' War, these weapons became more common, initially as the bombard and later the cannon. Cannon were always muzzle-loaders. While there were many early attempts at breech-loading designs, a lack of engineering knowledge rendered these even more dangerous to use than muzzle-loaders. Expansion of use In 1415, the Portuguese invaded the Mediterranean port town of Ceuta. While it is difficult to confirm the use of firearms in the siege of the city, it is known the Portuguese defended it thereafter with firearms, namely bombardas, colebratas, and falconetes. In 1419, Sultan Abu Sa'id led an army to reconquer the fallen city, and Marinids brought cannons and used them in the assault on Ceuta. Finally, hand-held firearms and riflemen appear in Morocco, in 1437, in an expedition against the people of Tangiers. It is clear these weapons had developed into several different forms, from small guns to large artillery pieces. The artillery revolution in Europe caught on during the Hundred Years' War and changed the way that battles were fought. In the preceding decades, the English had even used a gunpowder-like weapon in military campaigns against the Scottish. However, at this time, the cannons used in battle were very small and not particularly powerful. Cannons were only useful for the defense of a castle, as demonstrated at Breteuil in 1356, when the besieged English used a cannon to destroy an attacking French assault tower. By the end of the 14th century, cannon were only powerful enough to knock in roofs, and could not penetrate castle walls. However, a major change occurred between 1420 and 1430, when artillery became much more powerful and could now batter strongholds and fortresses quite efficiently. The English, French, and Burgundians all advanced in military technology, and as a result the traditional advantage that went to the defense in a siege was lost. The cannon during this period were elongated, and the recipe for gunpowder was improved to make it three times as powerful as before. These changes led to the increased power in the artillery weapons of the time. Joan of Arc encountered gunpowder weaponry several times. When she led the French against the English at the Battle of Tourelles, in 1430, she faced heavy gunpowder fortifications, and yet her troops prevailed in that battle. In addition, she led assaults against the English-held towns of Jargeau, Meung, and Beaugency, all with the support of large artillery units. When she led the assault on Paris, Joan faced stiff artillery fire, especially from the suburb of St. Denis, which ultimately led to her defeat in this battle. In April 1430, she went to battle against the Burgundians, whose support was purchased by the English. At this time, the Burgundians had the strongest and largest gunpowder arsenal among the European powers, and yet the French, under Joan of Arc's leadership, were able to beat back the Burgundians and defend themselves. As a result, most of the battles of the Hundred Years' War that Joan of Arc participated in were fought with gunpowder artillery. The army of Mehmet the Conqueror, which conquered Constantinople in 1453, included both artillery and foot soldiers armed with gunpowder weapons. The Ottomans brought to the siege sixty-nine guns in fifteen separate batteries and trained them at the walls of the city. The barrage of Ottoman cannon fire lasted forty days, and they are estimated to have fired 19,320 times. Artillery also played a decisive role in the Battle of St. Jakob an der Birs of 1444. Early cannon were not always reliable; King James II of Scotland was killed by the accidental explosion of one of his own cannon, imported from Flanders, at the siege of Roxburgh Castle in 1460. The able use of artillery supported to a large measure the expansion and defense of the Portuguese Empire, as it was a necessary tool that allowed the Portuguese to face overwhelming odds both on land and sea from Morocco to Asia. In great sieges and in sea battles, the Portuguese demonstrated a level of proficiency in the use of artillery after the beginning of the 16th century unequalled by contemporary European neighbours, in part due to the experience gained in intense fighting in Morocco, which served as a proving ground for artillery and its practical application, and made Portugal a forerunner in gunnery for decades. During the reign of King Manuel (1495–1521) at least 2017 cannon were sent to Morocco for garrison defense, with more than 3000 cannon estimated to have been required during that 26 year period. An especially noticeable division between siege guns and anti-personnel guns inhanced the use and effectiveness of Portuguese firearms above contemporary powers, making cannon the most essential element in the Portuguese arsenal. The three major classes of Portuguese artillery were anti-personnel guns with a high borelength (including: rebrodequim, berço, falconete, falcão, sacre, áspide, cão, serpentina and passavolante); bastion guns which could batter fortifications (camelete, leão, pelicano, basilisco, águia, camelo, roqueira, urso); and howitzers that fired large stone cannonballs in an elevated arch, weighted up to 4000 pounds and could fire incendiary devices, such as a hollow iron ball filled with pitch and fuse, designed to be fired at close range and burst on contact. The most popular in Portuguese arsenals was the berço, a 5cm, one pounder bronze breech-loading cannon that weighted 150kg with an effective range of 600 meters. A tactical innovation the Portuguese introduced in fort defense was the use of combinations of projectiles against massed assaults. Although canister shot had been developed in the early 15th century, the Portuguese were the first to employ it extensively, and Portuguese engineers invented a canister round which consisted of a thin lead case filled with iron pellets, that broke up at the muzzle and scattered its contents in a narrow pattern. An innovation which Portugal adopted in advance of other European powers was fuse-delayed action shells, and were commonly used in 1505. Although dangerous, their effectiveness meant a sixth of all rounds used by the Portuguese in Morocco were of the fused-shell variety. The new Ming Dynasty established the "Divine Engine Battalion" (神机营), which specialized in various types of artillery. Light cannons and cannons with multiple volleys were developed. In a campaign to suppress a local minority rebellion near today's Burmese border, "the Ming army used a 3-line method of arquebuses/muskets to destroy an elephant formation." When the Portuguese and Spanish arrived at Southeast Asia, they found that the local kingdoms were already using cannons. Portuguese and Spanish invaders were unpleasantly surprised and even outgunned on occasion. Duarte Barbosa ca. 1514 said that the inhabitants of Java were great masters in casting artillery and very good artillerymen. They made many one-pounder cannons (cetbang or rentaka), long muskets, spingarde (arquebus), schioppi (hand cannon), Greek fire, guns (cannons), and other fire-works. Every place was considered excellent in casting artillery, and in the knowledge of using it. In 1513, the Javanese fleet led by Pati Unus sailed to attack Portuguese Malacca "with much artillery made in Java, for the Javanese are skilled in founding and casting, and in all works in iron, over and above what they have in India". By the early 16th century, the Javanese already locally-producing large guns, some of them still survived until the present day and dubbed as "sacred cannon" or "holy cannon". These cannons varied between 180 and 260 pounders, weighing anywhere between 3–8 tons, measuring between 3–6 m. Between 1593 and 1597, about 200,000 Korean and Chinese troops which fought against Japan in Korea actively used heavy artillery in both siege and field combat. Korean forces mounted artillery in ships as naval guns, providing an advantage against Japanese navy which used Kunikuzushi (国崩し – Japanese breech-loading swivel gun) and Ōzutsu (大筒 – large size Tanegashima) as their largest firearms. Smoothbores Bombards were of value mainly in sieges. A famous Turkish example used at the siege of Constantinople in 1453 weighed 19 tons, took 200 men and sixty oxen to emplace, and could fire just seven times a day. The Fall of Constantinople was perhaps "the first event of supreme importance whose result was determined by the use of artillery" when the huge bronze cannons of Mehmed II breached the city's walls, ending the Byzantine Empire, according to Sir Charles Oman. Bombards developed in Europe were massive smoothbore weapons distinguished by their lack of a field carriage, immobility once emplaced, highly individual design, and noted unreliability (in 1460 James II, King of Scots, was killed when one exploded at the siege of Roxburgh). Their large size precluded the barrels being cast and they were constructed out of metal staves or rods bound together with hoops like a barrel, giving their name to the gun barrel. The use of the word "cannon" marks the introduction in the 15th century of a dedicated field carriage with axle, trail and animal-drawn limber—this produced mobile field pieces that could move and support an army in action, rather than being found only in the siege and static defenses. The reduction in the size of the barrel was due to improvements in both iron technology and gunpowder manufacture, while the development of trunnions—projections at the side of the cannon as an integral part of the cast—allowed the barrel to be fixed to a more movable base, and also made raising or lowering the barrel much easier. The first land-based mobile weapon is usually credited to Jan Žižka, who deployed his oxen-hauled cannon during the Hussite Wars of Bohemia (1418–1424). However, cannons were still large and cumbersome. With the rise of musketry in the 16th century, cannon were largely (though not entirely) displaced from the battlefield—the cannon were too slow and cumbersome to be used and too easily lost to a rapid enemy advance. The combining of shot and powder into a single unit, a cartridge, occurred in the 1620s with a simple fabric bag, and was quickly adopted by all nations. It speeded loading and made it safer, but unexpelled bag fragments were an additional fouling in the gun barrel and a new tool—a worm—was introduced to remove them. Gustavus Adolphus is identified as the general who made cannon an effective force on the battlefield—pushing the development of much lighter and smaller weapons and deploying them in far greater numbers than previously. The outcome of battles was still determined by the clash of infantry. Shells, explosive-filled fused projectiles, were in use by the 15th century. The development of specialized pieces—shipboard artillery, howitzers and mortars—was also begun in this period. More esoteric designs, like the multi-barrel ribauldequin (known as "organ guns"), were also produced. The 1650 book by Kazimierz Siemienowicz Artis Magnae Artilleriae pars prima was one of the most important contemporary publications on the subject of artillery. For over two centuries this work was used in Europe as a basic artillery manual. One of the most significant effects of artillery during this period was however somewhat more indirect—by easily reducing to rubble any medieval-type fortification or city wall (some which had stood since Roman times), it abolished millennia of siege-warfare strategies and styles of fortification building. This led, among other things, to a frenzy of new bastion-style fortifications to be built all over Europe and in its colonies, but also had a strong integrating effect on emerging nation-states, as kings were able to use their newfound artillery superiority to force any local dukes or lords to submit to their will, setting the stage for the absolutist kingdoms to come. Modern rocket artillery can trace its heritage back to the Mysorean rockets of India. Their first recorded use was in 1780 during the battles of the Second, Third and Fourth Mysore Wars. The wars fought between the British East India Company and the Kingdom of Mysore in India made use of the rockets as a weapon. In the Battle of Pollilur, the Siege of Seringapatam (1792) and in Battle of Seringapatam in 1799, these rockets were used with considerable effect against the British. After the wars, several Mysore rockets were sent to England, but experiments with heavier payloads were unsuccessful. In 1804 William Congreve, considering the Mysorian rockets to have too short a range (less than 1,000 yards) developed rockets in numerous sizes with ranges up to 3,000 yards and eventually utilizing iron casing as the Congreve rocket which were used effectively during the Napoleonic Wars and the War of 1812. Napoleonic With the Napoleonic Wars, artillery experienced changes in both physical design and operation. Rather than being overseen by "mechanics", artillery was viewed as its own service branch with the capability of dominating the battlefield. The success of the French artillery companies was at least in part due to the presence of specially trained artillery officers leading and coordinating during the chaos of battle. Napoleon, himself a former artillery officer, perfected the tactic of massed artillery batteries unleashed upon a critical point in his enemies' line as a prelude to a decisive infantry and cavalry assault. Physically, cannons continued to become smaller and lighter. During the Seven Years War, King Frederick II of Prussia used these advances to deploy horse artillery that could move throughout the battlefield. Frederick also introduced the reversible iron ramrod, which was much more resistant to breakage than older wooden designs. The reversibility aspect also helped increase the rate of fire, since a soldier would no longer have to worry about what end of the ramrod they were using. Jean-Baptiste de Gribeauval, a French artillery engineer, introduced the standardization of cannon design in the mid-18th century. He developed a 6-inch (150 mm) field howitzer whose gun barrel, carriage assembly and ammunition specifications were made uniform for all French cannons. The standardized interchangeable parts of these cannons down to the nuts, bolts and screws made their mass production and repair much easier. While the Gribeauval system made for more efficient production and assembly, the carriages used were heavy and the gunners were forced to march on foot (instead of riding on the limber and gun as in the British system). Each cannon was named for the weight of its projectiles, giving us variants such as 4, 8, and 12, indicating the weight in pounds. The projectiles themselves included solid balls or canister containing lead bullets or other material. These canister shots acted as massive shotguns, peppering the target with hundreds of projectiles at close range. The solid balls, known as round shot, was most effective when fired at shoulder-height across a flat, open area. The ball would tear through the ranks of the enemy or bounce along the ground breaking legs and ankles. Modern The development of modern artillery occurred in the mid to late 19th century as a result of the convergence of various improvements in the underlying technology. Advances in metallurgy allowed for the construction of breech-loading rifled guns that could fire at a much greater muzzle velocity. After the British artillery was shown up in the Crimean War as having barely changed since the Napoleonic Wars, the industrialist William Armstrong was awarded a contract by the government to design a new piece of artillery. Production started in 1855 at the Elswick Ordnance Company and the Royal Arsenal at Woolwich, and the outcome was the revolutionary Armstrong Gun, which marked the birth of modern artillery. Three of its features particularly stand out. First, the piece was rifled, which allowed for a much more accurate and powerful action. Although rifling had been tried on small arms since the 15th century, the necessary machinery to accurately rifle artillery was not available until the mid-19th century. Martin von Wahrendorff, and Joseph Whitworth independently produced rifled cannon in the 1840s, but it was Armstrong's gun that was first to see widespread use during the Crimean War. The cast iron shell of the Armstrong gun was similar in shape to a Minié ball and had a thin lead coating which made it fractionally larger than the gun's bore and which engaged with the gun's rifling grooves to impart spin to the shell. This spin, together with the elimination of windage as a result of the tight fit, enabled the gun to achieve greater range and accuracy than existing smooth-bore muzzle-loaders with a smaller powder charge. His gun was also a breech-loader. Although attempts at breech-loading mechanisms had been made since medieval times, the essential engineering problem was that the mechanism could not withstand the explosive charge. It was only with the advances in metallurgy and precision engineering capabilities during the Industrial Revolution that Armstrong was able to construct a viable solution. The gun combined all the properties that make up an effective artillery piece. The gun was mounted on a carriage in such a way as to return the gun to firing position after the recoil. What made the gun really revolutionary lay in the technique of the construction of the gun barrel that allowed it to withstand much more powerful explosive forces. The "built-up" method involved assembling the barrel with wrought-iron (later mild steel was used) tubes of successively smaller diameter. The tube would then be heated to allow it to expand and fit over the previous tube. When it cooled the gun would contract although not back to its original size, which allowed an even pressure along the walls of the gun which was directed inward against the outward forces that the gun's firing exerted on the barrel. Another innovative feature, more usually associated with 20th-century guns, was what Armstrong called its "grip", which was essentially a squeeze bore; the 6 inches of the bore at the muzzle end was of slightly smaller diameter, which centered the shell before it left the barrel and at the same time slightly swaged down its lead coating, reducing its diameter and slightly improving its ballistic qualities. Armstrong's system was adopted in 1858, initially for "special service in the field" and initially he produced only smaller artillery pieces, 6-pounder (2.5 in/64 mm) mountain or light field guns, 9-pounder (3 in/76 mm) guns for horse artillery, and 12-pounder (3 inches /76 mm) field guns. The first cannon to contain all 'modern' features is generally considered to be the French 75 of 1897. The gun used cased ammunition, was breech-loading, had modern sights, and a self-contained firing mechanism. It was the first field gun to include a hydro-pneumatic recoil mechanism, which kept the gun's trail and wheels perfectly still during the firing sequence. Since it did not need to be re-aimed after each shot, the crew could fire as soon as the barrel returned to its resting position. In typical use, the French 75 could deliver fifteen rounds per minute on its target, either shrapnel or melinite high-explosive, up to about 5 miles (8,500 m) away. Its firing rate could even reach close to 30 rounds per minute, albeit only for a very short time and with a highly experienced crew. These were rates that contemporary bolt action rifles could not match. Indirect fire Indirect fire, the firing of a projectile without relying on direct line of sight between the gun and the target, possibly dates back to the 16th century. Early battlefield use of indirect fire may have occurred at Paltzig in July 1759, when the Russian artillery fired over the tops of trees, and at the Battle of Waterloo, where a battery of the Royal Horse Artillery fired shrapnel indirectly against advancing French troops. In 1882, Russian Lieutenant Colonel KG Guk published Indirect Fire for Field Artillery, which provided a practical method of using aiming points for indirect fire by describing, "all the essentials of aiming points, crest clearance, and corrections to fire by an observer". A few years later, the Richtfläche (lining-plane) sight was invented in Germany and provided a means of indirect laying in azimuth, complementing the clinometers for indirect laying in elevation which already existed. Despite conservative opposition within the German army, indirect fire was adopted as doctrine by the 1890s. In the early 1900s, Goertz in Germany developed an optical sight for azimuth laying. It quickly replaced the lining-plane; in English, it became the 'Dial Sight' (UK) or 'Panoramic Telescope' (US). The British halfheartedly experimented with indirect fire techniques since the 1890s, but with the onset of the Boer War, they were the first to apply the theory in practice in 1899, although they had to improvise without a lining-plane sight. In the next 15 years leading up to World War I, the techniques of indirect fire became available for all types of artillery. Indirect fire was the defining characteristic of 20th-century artillery and led to undreamt of changes in the amount of artillery, its tactics, organisation, and techniques, most of which occurred during World War I. An implication of indirect fire and improving guns was increasing range between gun and target, this increased the time of flight and the vertex of the trajectory. The result was decreasing accuracy (the increasing distance between the target and the mean point of impact of the shells aimed at it) caused by the increasing effects of non-standard conditions. Indirect firing data was based on standard conditions including a specific muzzle velocity, zero wind, air temperature and density, and propellant temperature. In practice, this standard combination of conditions almost never existed, they varied throughout the day and day to day, and the greater the time of flight, the greater the inaccuracy. An added complication was the need for survey to accurately fix the coordinates of the gun position and provide accurate orientation for the guns. Of course, targets had to be accurately located, but by 1916, air photo interpretation techniques enabled this, and ground survey techniques could sometimes be used. In 1914, the methods of correcting firing data for the actual conditions were often convoluted, and the availability of data about actual conditions was rudimentary or non-existent, the assumption was that fire would always be ranged (adjusted). British heavy artillery worked energetically to progressively solve all these problems from late 1914 onwards, and by early 1918, had effective processes in place for both field and heavy artillery. These processes enabled 'map-shooting', later called 'predicted fire'; it meant that effective fire could be delivered against an accurately located target without ranging. Nevertheless, the mean point of impact was still some tens of yards from the target-centre aiming point. It was not precision fire, but it was good enough for concentrations and barrages. These processes remain in use into the 21st century with refinements to calculations enabled by computers and improved data capture about non-standard conditions. The British major-general Henry Hugh Tudor pioneered armour and artillery cooperation at the breakthrough Battle of Cambrai. The improvements in providing and using data for non-standard conditions (propellant temperature, muzzle velocity, wind, air temperature, and barometric pressure) were developed by the major combatants throughout the war and enabled effective predicted fire. The effectiveness of this was demonstrated by the British in 1917 (at Cambrai) and by Germany the following year (Operation Michael). Major General J.B.A. Bailey, British Army (retired) wrote: An estimated 75,000 French soldiers were casualties of friendly artillery fire in the four years of World War I. Precision-guidance Modern artillery is most obviously distinguished by its long range, firing an explosive shell or rocket and a mobile carriage for firing and transport. However, its most important characteristic is the use of indirect fire, whereby the firing equipment is aimed without seeing the target through its sights. Indirect fire emerged at the beginning of the 20th century and was greatly enhanced by the development of predicted fire methods in World War I. However, indirect fire was area fire; it was and is not suitable for destroying point targets; its primary purpose is area suppression. Nevertheless, by the late 1970s precision-guided munitions started to appear, notably the US 155 mm Copperhead and its Soviet 152 mm Krasnopol equivalent that had success in Indian service. These relied on laser designation to 'illuminate' the target that the shell homed onto. However, in the early 21st century, the Global Positioning System (GPS) enabled relatively cheap and accurate guidance for shells and missiles, notably the US 155 mm Excalibur and the 227 mm GMLRS rocket. The introduction of these led to a new issue, the need for very accurate three dimensional target coordinates—the mensuration process. Weapons covered by the term 'modern artillery' include "cannon" artillery (such as howitzer, mortar, and field gun) and rocket artillery. Certain smaller-caliber mortars are more properly designated small arms rather than artillery, albeit indirect-fire small arms. This term also came to include coastal artillery which traditionally defended coastal areas against seaborne attack and controlled the passage of ships. With the advent of powered flight at the start of the 20th century, artillery also included ground-based anti-aircraft batteries. The term "artillery" has traditionally not been used for projectiles with internal guidance systems, preferring the term "missilery", though some modern artillery units employ surface-to-surface missiles. Advances in terminal guidance systems for small munitions has allowed large-caliber guided projectiles to be developed, blurring this distinction.<ref>{{Cite book|last=Chikammadu|first=Ali Caleb|title=Enotenplato The Chronicle of Military Doctrine'|publisher=Lulu.com|date=September 3, 2019|isbn=9780359806997|pages=196}}</ref> See Long Range Precision Fires (LRPF), Joint terminal attack controllerAmmunition One of the most important roles of logistics is the supply of munitions as a primary type of artillery consumable, their storage (ammunition dump, arsenal, magazine ) and the provision of fuzes, detonators and warheads at the point where artillery troops will assemble the charge, projectile, bomb or shell. A round of artillery ammunition comprises four components: Fuze Projectile Propellant Primer Fuzes Fuzes are the devices that initiate an artillery projectile, either to detonate its High Explosive (HE) filling or eject its cargo (illuminating flare or smoke canisters being examples). The official military spelling is "fuze". Broadly there are four main types: impact (including graze and delay) mechanical time including airburst proximity sensor including airburst programmable electronic detonation including airburst Most artillery fuzes are nose fuzes. However, base fuzes have been used with armor-piercing shells and for squash head (High-Explosive Squash Head (HESH) or High Explosive, Plastic (HEP) anti-tank shells). At least one nuclear shell and its non-nuclear spotting version also used a multi-deck mechanical time fuze fitted into its base. Impact fuzes were, and in some armies remain, the standard fuze for HE projectiles. Their default action is normally 'superquick', some have had a 'graze' action which allows them to penetrate light cover and others have 'delay'. Delay fuzes allow the shell to penetrate the ground before exploding. Armor or Concrete-Piercing (AP or CP) fuzes are specially hardened. During World War I and later, ricochet fire with delay or graze fuzed HE shells, fired with a flat angle of descent, was used to achieve airburst. HE shells can be fitted with other fuzes. Airburst fuzes usually have a combined airburst and impact function. However, until the introduction of proximity fuzes, the airburst function was mostly used with cargo munitions—for example, shrapnel, illumination, and smoke. The larger calibers of anti-aircraft artillery are almost always used airburst. Airburst fuzes have to have the fuze length (running time) set on them. This is done just before firing using either a wrench or a fuze setter pre-set to the required fuze length. Early airburst fuzes used igniferous timers which lasted into the second half of the 20th century. Mechanical time fuzes appeared in the early part of the century. These required a means of powering them. The Thiel mechanism used a spring and escapement (i.e. 'clockwork'), Junghans used centrifugal force and gears, and Dixi used centrifugal force and balls. From about 1980, electronic time fuzes started replacing mechanical ones for use with cargo munitions. Proximity fuzes have been of two types: photo-electric or radar. The former was not very successful and seems only to have been used with British anti-aircraft artillery 'unrotated projectiles' (rockets) in World War II. Radar proximity fuzes were a big improvement over the mechanical (time) fuzes which they replaced. Mechanical time fuzes required an accurate calculation of their running time, which was affected by non-standard conditions. With HE (requiring a burst 20 to above the ground), if this was very slightly wrong the rounds would either hit the ground or burst too high. Accurate running time was less important with cargo munitions that burst much higher. The first radar proximity fuzes (perhaps originally codenamed 'VT' and later called Variable Time (VT)) were invented by the British and developed by the US and initially used against aircraft in World War II. Their ground use was delayed for fear of the enemy recovering 'blinds' (artillery shells which failed to detonate) and copying the fuze. The first proximity fuzes were designed to detonate about above the ground. These air-bursts are much more lethal against personnel than ground bursts because they deliver a greater proportion of useful fragments and deliver them into terrain where a prone soldier would be protected from ground bursts. However, proximity fuzes can suffer premature detonation because of the moisture in heavy rain clouds. This led to 'Controlled Variable Time' (CVT) after World War II. These fuzes have a mechanical timer that switched on the radar about 5 seconds before expected impact, they also detonated on impact. The proximity fuze emerged on the battlefields of Europe in late December 1944. They have become known as the U.S. Artillery's "Christmas present", and were much appreciated when they arrived during the Battle of the Bulge. They were also used to great effect in anti-aircraft projectiles in the Pacific against kamikaze as well as in Britain against V-1 flying bombs. Electronic multi-function fuzes started to appear around 1980. Using solid-state electronics they were relatively cheap and reliable, and became the standard fitted fuze in operational ammunition stocks in some western armies. The early versions were often limited to proximity airburst, albeit with height of burst options, and impact. Some offered a go/no-go functional test through the fuze setter. Later versions introduced induction fuze setting and testing instead of physically placing a fuze setter on the fuze. The latest, such as Junghan's DM84U provide options giving, superquick, delay, a choice of proximity heights of burst, time and a choice of foliage penetration depths. A new type of artillery fuze will appear soon. In addition to other functions these offer some course correction capability, not full precision but sufficient to significantly reduce the dispersion of the shells on the ground. Projectiles The projectile is the munition or "bullet" fired downrange. This may be an explosive device. Projectiles have traditionally been classified as "shot" or "shell", the former being solid and the latter having some form of "payload". Shells can be divided into three configurations: bursting, base ejection or nose ejection. The latter is sometimes called the shrapnel configuration. The most modern is base ejection, which was introduced in World War I. Base and nose ejection are almost always used with airburst fuzes. Bursting shells use various types of fuze depending on the nature of the payload and the tactical need at the time. Payloads have included: Bursting: high-explosive, white phosphorus, coloured marker, chemical, nuclear devices; high-explosive anti-tank and canister may be considered special types of bursting shell. Nose ejection: shrapnel, star, incendiary and flechette (a more modern version of shrapnel). Base ejection: Dual-Purpose Improved Conventional Munition bomblets, which arm themselves and function after a set number of rotations after having been ejected from the projectile (this produces unexploded sub-munitions, or "duds", which remain dangerous), scatterable mines, illuminating, coloured flare, smoke, incendiary, propaganda, chaff (foil to jam radars) and modern exotics such as electronic payloads and sensor-fuzed munitions. Stabilization Rifled: Artillery projectiles have traditionally been spin-stabilised, meaning that they spin in flight so that gyroscopic forces prevent them from tumbling. Spin is induced by gun barrels having rifling, which engages a soft metal band around the projectile, called a "driving band" (UK) or "rotating band" (U.S.). The driving band is usually made of copper, but synthetic materials have been used. Smoothbore/fin-stabilized: In modern artillery, smoothbore barrels have been used mostly by mortars. These projectiles use fins in the airflow at their rear to maintain correct orientation. The primary benefits over rifled barrels is reduced barrel wear, longer ranges that can be achieved (due to the reduced loss of energy to friction and gas escaping around the projectile via the rifling) and larger explosive cores for a given caliber artillery due to less metal needing to be used to form the case of the projectile because of less force applied to the shell from the non-rifled sides of the barrel of smooth bore guns. Rifled/fin-stabilized: A combination of the above can be used, where the barrel is rifled, but the projectile also has deployable fins for stabilization, guidance or gliding. Propellant Most forms of artillery require a propellant to propel the projectile at the target. Propellant is always a low explosive, which means it deflagrates, rather than detonating like high explosives. The shell is accelerated to a high velocity in a very short time by the rapid generation of gas from the burning propellant. This high pressure is achieved by burning the propellant in a contained area, either the chamber of a gun barrel or the combustion chamber of a rocket motor. Until the late 19th century, the only available propellant was black powder. It had many disadvantages as a propellant; it has relatively low power, requiring large amounts of powder to fire projectiles, and created thick clouds of white smoke that would obscure the targets, betray the positions of guns, and make aiming impossible. In 1846, nitrocellulose (also known as guncotton) was discovered, and the high explosive nitroglycerin was discovered at nearly the same time. Nitrocellulose was significantly more powerful than black powder, and was smokeless. Early guncotton was unstable, however, and burned very fast and hot, leading to greatly increased barrel wear. Widespread introduction of smokeless powder would wait until the advent of the double-base powders, which combine nitrocellulose and nitroglycerin to produce powerful, smokeless, stable propellant. Many other formulations were developed in the following decades, generally trying to find the optimum characteristics of a good artillery propellant – low temperature, high energy, non-corrosive, highly stable, cheap, and easy to manufacture in large quantities. Modern gun propellants are broadly divided into three classes: single-base propellants that are mainly or entirely nitrocellulose based, double-base propellants consisting of a combination of nitrocellulose and nitroglycerin, and triple base composed of a combination of nitrocellulose and nitroglycerin and nitroguanidine. Artillery shells fired from a barrel can be assisted to greater range in three ways: Rocket-assisted projectiles enhance and sustain the projectile's velocity by providing additional 'push' from a small rocket motor that is part of the projectile's base. Base bleed uses a small pyrotechnic charge at the base of the projectile to introduce sufficient combustion products into the low-pressure region behind the base of the projectile responsible for a large proportion of the drag. Ramjet-assisted, similar to rocket-assisted, but using a ramjet instead of a rocket motor; it is anticipated that a ramjet-assisted 120-mm mortar shell could reach a range of . Propelling charges for barrel artillery can be provided either as cartridge bags or in metal cartridge cases. Generally, anti-aircraft artillery and smaller-caliber (up to 3" or 76.2 mm) guns use metal cartridge cases that include the round and propellant, similar to a modern rifle cartridge. This simplifies loading and is necessary for very high rates of fire. Bagged propellant allows the amount of powder to be raised or lowered, depending on the range to the target. It also makes handling of larger shells easier. Cases and bags require totally different types of breech. A metal case holds an integral primer to initiate the propellant and provides the gas seal to prevent the gases leaking out of the breech; this is called obturation. With bagged charges, the breech itself provides obturation and holds the primer. In either case, the primer is usually percussion, but electrical is also used, and laser ignition is emerging. Modern 155 mm guns have a primer magazine fitted to their breech. Artillery ammunition has four classifications according to use: Service: ammunition used in live fire training or for wartime use in a combat zone. Also known as "warshot" ammunition. Practice: Ammunition with a non- or minimally-explosive projectile that mimics the characteristics (range, accuracy) of live rounds for use under training conditions. Practice artillery ammunition often utilizes a colored-smoke-generating bursting charge for marking purposes in place of the normal high-explosive charge. Dummy: Ammunition with an inert warhead, inert primer, and no propellant; used for training or display. Blank: Ammunition with live primer, greatly reduced propellant charge (typically black powder), and no projectile; used for training, demonstration or ceremonial use. Field artillery system Because modern field artillery mostly uses indirect fire, the guns have to be part of a system that enables them to attack targets invisible to them, in accordance with the combined arms plan. The main functions in the field artillery system are: Communications Command: authority to allocate resources; Target acquisition: detect, identify and deduce the location of targets; Control: authority to decide which targets to attack and allot fire units to the attack; Computation of firing data – to deliver fire from a fire unit onto its target; Fire units: guns, launchers or mortars grouped together; Specialist services: produce data to support the production of accurate firing data; Logistic services: to provide combat supplies, particularly ammunition, and equipment support. All these calculations to produce a quadrant elevation (or range) and azimuth were done manually using instruments, tabulated, data of the moment, and approximations until battlefield computers started appearing in the 1960s and 1970s. While some early calculators copied the manual method (typically substituting polynomials for tabulated data), computers use a different approach. They simulate a shell's trajectory by 'flying' it in short steps and applying data about the conditions affecting the trajectory at each step. This simulation is repeated until it produces a quadrant elevation and azimuth that lands the shell within the required 'closing' distance of the target coordinates. NATO has a standard ballistic model for computer calculations and has expanded the scope of this into the NATO Armaments Ballistic Kernel (NABK) within the SG2 Shareable (Fire Control) Software Suite (S4). Logistics Supply of artillery ammunition has always been a major component of military logistics. Up until World War I some armies made artillery responsible for all forward ammunition supply because the load of small arms ammunition was trivial compared to artillery. Different armies use different approaches to ammunition supply, which can vary with the nature of operations. Differences include where the logistic service transfers artillery ammunition to artillery, the amount of ammunition carried in units and extent to which stocks are held at unit or battery level. A key difference is whether supply is 'push' or 'pull'. In the former the 'pipeline' keeps pushing ammunition into formations or units at a defined rate. In the latter units fire as tactically necessary and replenish to maintain or reach their authorised holding (which can vary), so the logistic system has to be able to cope with surge and slack. Classification Artillery types can be categorised in several ways, for example by type or size of weapon or ordnance, by role or by organizational arrangements. Types of ordnance The types of cannon artillery are generally distinguished by the velocity at which they fire projectiles. Types of artillery: Cannon: The oldest type of artillery with direct firing trajectory. Bombard: A type of a large calibre, muzzle-loading artillery piece, a cannon or mortar used during sieges to shoot round stone projectiles at the walls of enemy fortifications. Falconet was a type of light cannon developed in the late 15th century that fired a smaller shot than the similar falcon. Swivel gun is a type of small cannon mounted on a swiveling stand or fork which allows a very wide arc of movement. Camel mounted swivel guns called as zamburak were used by the Gunpowder Empires as self-propelled artillery. Siege artillery: Large-caliber artillery that have limited mobility with indirect firing trajectory, which was used to bombard targets at long distances. Large-calibre artillery. Field artillery: Mobile weapons used to support armies in the field. Subcategories include: Infantry support guns: Directly support infantry units. Mountain guns: Lightweight guns that can be disassembled and transported through difficult terrain. Field guns: Capable of long-range direct fires. Howitzers: Capable of high-angle fire, they are most often employed for indirect-fire. Gun-howitzers: Capable of high or low-angle fire with a longer barrel. Mortars: Typically muzzle-loaded, short-barreled, high-trajectory weapons designed primarily for an indirect-fire role. Gun-mortars: Typically breech-loaded, capable of high or low-angle fire with a longer barrel. Tank guns: Large-caliber guns mounted on tanks to provide mobile direct fire. Anti-tank artillery: Guns, usually mobile, designed primarily for direct fire to destroy armored fighting vehicles with heavy armor. Anti-tank gun: Guns designed for direct fire to destroy tanks and other armored fighting vehicles. Anti-aircraft artillery: Guns, usually mobile, designed for attacking aircraft by land and/or at sea. Some guns were suitable for the dual roles of anti-aircraft and anti-tank warfare. Rocket artillery: Launches rockets or missiles, instead of shot or shell. Railway gun: Large-caliber weapons that are mounted on, transported by and fired from specially-designed railway wagons. Naval artillery: Guns mounted on warships to be used either against other naval vessels or to bombard coastal targets in support of ground forces. The crowning achievement of naval artillery was the battleship, but the advent of air power and missiles have rendered this type of artillery largely obsolete. They are typically longer-barreled, low-trajectory, high-velocity weapons designed primarily for a direct-fire role. Coastal artillery: Fixed-position weapons dedicated to defense of a particular location, usually a coast (for example, the Atlantic Wall in World War II) or harbor. Not needing to be mobile, coastal artillery used to be much larger than equivalent field artillery pieces, giving them longer range and more destructive power. Modern coastal artillery (for example, Russia's "Bereg" system) is often self-propelled, (allowing it to avoid counter-battery fire) and fully integrated, meaning that each battery has all of the support systems that it requires (maintenance, targeting radar, etc.) organic to its unit. Aircraft artillery: Large-caliber guns mounted on attack aircraft, this is typically found on slow-flying gunships. Nuclear artillery: Artillery which fires nuclear shells. Modern field artillery can also be split into two other subcategories: towed and self-propelled. As the name suggests, towed artillery has a prime mover, usually an artillery tractor or truck, to move the piece, crew, and ammunition around. Towed artillery is in some cases equipped with an APU for small displacements. Self-propelled artillery is permanently mounted on a carriage or vehicle with room for the crew and ammunition and is thus capable of moving quickly from one firing position to another, both to support the fluid nature of modern combat and to avoid counter-battery fire. It includes mortar carrier vehicles, many of which allow the mortar to be removed from the vehicle and be used dismounted, potentially in terrain in which the vehicle cannot navigate, or in order to avoid detection. Organizational types At the beginning of the modern artillery period, the late 19th century, many armies had three main types of artillery, in some case they were sub-branches within the artillery branch in others they were separate branches or corps. There were also other types excluding the armament fitted to warships: Horse artillery, first formed as regular units in the late 18th century, with the role of supporting cavalry, they were distinguished by the entire crew being mounted. Field or "foot" artillery, the main artillery arm of the field army, using either guns, howitzers, or mortars. In World War II this branch again started using rockets and later surface to surface missiles. Fortress or garrison artillery, operated a nation's fixed defences using guns, howitzers or mortars, either on land or coastal frontiers. Some had deployable elements to provide heavy artillery to the field army. In some nations coast defence artillery was a naval responsibility. Mountain artillery, a few nations treated mountain artillery as a separate branch, in others it was a speciality in another artillery branch. They used light guns or howitzers, usually designed for pack animal transport and easily broken down into small easily handled loads Naval artillery, some nations carried pack artillery on some warships, these were used and manhandled by naval (or marine) landing parties. At times, part of a ship's armament would be unshipped and mated to makeshift carriages and limbers for actions ashore, for example during the Second Boer War, during the First World War the guns from the stricken SMS Königsberg formed the main artillery strength of the German forces in East Africa. After World War I many nations merged these different artillery branches, in some cases keeping some as sub-branches. Naval artillery disappeared apart from that belonging to marines. However, two new branches of artillery emerged during that war and its aftermath, both used specialised guns (and a few rockets) and used direct not indirect fire, in the 1950s and 1960s both started to make extensive use of missiles: Anti-tank artillery, also under various organisational arrangements but typically either field artillery or a specialist branch and additional elements integral to infantry, etc., units. However, in most armies field and anti-aircraft artillery also had at least a secondary anti-tank role. After World War II anti-tank in Western armies became mostly the responsibility of infantry and armoured branches and ceased to be an artillery matter, with some exceptions. Anti-aircraft artillery, under various organisational arrangements including being part of artillery, a separate corps, even a separate service or being split between army for the field and air force for home defence. In some cases infantry and the new armoured corps also operated their own integral light anti-aircraft artillery. Home defence anti-aircraft artillery often used fixed as well as mobile mountings. Some anti-aircraft guns could also be used as field or anti-tank artillery, providing they had suitable sights. However, the general switch by artillery to indirect fire before and during World War I led to a reaction in some armies. The result was accompanying or infantry guns. These were usually small, short range guns, that could be easily man-handled and used mostly for direct fire but some could use indirect fire. Some were operated by the artillery branch but under command of the supported unit. In World War II they were joined by self-propelled assault guns, although other armies adopted infantry or close support tanks in armoured branch units for the same purpose, subsequently tanks generally took on the accompanying role. Equipment types The three main types of artillery "gun" are guns, howitzers, and mortars. During the 20th century, guns and howitzers have steadily merged in artillery use, making a distinction between the terms somewhat meaningless. By the end of the 20th century, true guns with calibers larger than about 60 mm have become very rare in artillery use, the main users being tanks, ships, and a few residual anti-aircraft and coastal guns. The term "cannon" is a United States generic term that includes guns, howitzers, and mortars; it is not used in other English speaking armies. The traditional definitions differentiated between guns and howitzers in terms of maximum elevation (well less than 45° as opposed to close to or greater than 45°), number of charges (one or more than one charge), and having higher or lower muzzle velocity, sometimes indicated by barrel length. These three criteria give eight possible combinations, of which guns and howitzers are but two. However, modern "howitzers" have higher velocities and longer barrels than the equivalent "guns" of the first half of the 20th century. True guns are characterized by long range, having a maximum elevation significantly less than 45°, a high muzzle velocity and hence a relatively long barrel, smooth bore (no rifling) and a single charge. The latter often led to fixed ammunition where the projectile is locked to the cartridge case. There is no generally accepted minimum muzzle velocity or barrel length associated with a gun. Howitzers can fire at maximum elevations at least close to 45°; elevations up to about 70° are normal for modern howitzers. Howitzers also have a choice of charges, meaning that the same elevation angle of fire will achieve a different range depending on the charge used. They have rifled bores, lower muzzle velocities and shorter barrels than equivalent guns. All this means they can deliver fire with a steep angle of descent. Because of their multi-charge capability, their ammunition is mostly separate loading (the projectile and propellant are loaded separately). That leaves six combinations of the three criteria, some of which have been termed gun howitzers. A term first used in the 1930s when howitzers with a relatively high maximum muzzle velocities were introduced, it never became widely accepted, most armies electing to widen the definition of "gun" or "howitzer". By the 1960s, most equipment had maximum elevations up to about 70°, were multi-charge, had quite high maximum muzzle velocities and relatively long barrels. Mortars are simpler. The modern mortar originated in World War I and there were several patterns. After that war, most mortars settled on the Stokes pattern, characterized by a short barrel, smooth bore, low muzzle velocity, elevation angle of firing generally greater than 45°, and a very simple and light mounting using a "baseplate" on the ground. The projectile with its integral propelling charge was dropped down the barrel from the muzzle to hit a fixed firing pin. Since that time, a few mortars have become rifled and adopted breech loading. There are other recognized typifying characteristics for artillery. One such characteristic is the type of obturation used to seal the chamber and prevent gases escaping through the breech. This may use a metal cartridge case that also holds the propelling charge, a configuration called "QF" or "quickfiring" by some nations. The alternative does not use a metal cartridge case, the propellant being merely bagged or in combustible cases with the breech itself providing all the sealing. This is called "BL" or "breech loading" by some nations. A second characteristic is the form of propulsion. Modern equipment can either be towed or self-propelled (SP). A towed gun fires from the ground and any inherent protection is limited to a gun shield. Towing by horse teams lasted throughout World War II in some armies, but others were fully mechanized with wheeled or tracked gun towing vehicles by the outbreak of that war. The size of a towing vehicle depends on the weight of the equipment and the amount of ammunition it has to carry. A variation of towed is portee, where the vehicle carries the gun which is dismounted for firing. Mortars are often carried this way. A mortar is sometimes carried in an armored vehicle and can either fire from it or be dismounted to fire from the ground. Since the early 1960s it has been possible to carry lighter towed guns and most mortars by helicopter. Even before that, they were parachuted or landed by glider from the time of the first airborne trials in the USSR in the 1930s. In SP equipment, the gun is an integral part of the vehicle that carries it. SPs first appeared during World War I, but did not really develop until World War II. They are mostly tracked vehicles, but wheeled SPs started to appear in the 1970s. Some SPs have no armor and carry few or no other weapons and ammunition. Armored SPs usually carry a useful ammunition load. Early armored SPs were mostly a "casemate" configuration, in essence an open top armored box offering only limited traverse. However, most modern armored SPs have a full enclosed armored turret, usually giving full traverse for the gun. Many SPs cannot fire without deploying stabilizers or spades, sometimes hydraulic. A few SPs are designed so that the recoil forces of the gun are transferred directly onto the ground through a baseplate. A few towed guns have been given limited self-propulsion by means of an auxiliary engine. Two other forms of tactical propulsion were used in the first half of the 20th century: Railways or transporting the equipment by road, as two or three separate loads, with disassembly and re-assembly at the beginning and end of the journey. Railway artillery took two forms, railway mountings for heavy and super-heavy guns and howitzers and armored trains as "fighting vehicles" armed with light artillery in a direct fire role. Disassembled transport was also used with heavy and super heavy weapons and lasted into the 1950s. Caliber categories A third form of artillery typing is to classify it as "light", "medium", "heavy" and various other terms. It appears to have been introduced in World War I, which spawned a very wide array of artillery in all sorts of sizes so a simple categorical system was needed. Some armies defined these categories by bands of calibers. Different bands were used for different types of weapons—field guns, mortars, anti-aircraft guns and coastal guns. Modern operations List of countries in order of amount of artillery (only conventional barrel ordnance is given, in use with land forces): Artillery is used in a variety of roles depending on its type and caliber. The general role of artillery is to provide fire support—"the application of fire, coordinated with the manoeuvre of forces to destroy, neutralize or suppress the enemy". This NATO definition makes artillery a supporting arm although not all NATO armies agree with this logic. The italicised terms are NATO's. Unlike rockets, guns (or howitzers as some armies still call them) and mortars are suitable for delivering close supporting fire. However, they are all suitable for providing deep supporting fire although the limited range of many mortars tends to exclude them from the role. Their control arrangements and limited range also mean that mortars are most suited to direct supporting fire. Guns are used either for this or general supporting fire while rockets are mostly used for the latter. However, lighter rockets may be used for direct fire support. These rules of thumb apply to NATO armies. Modern mortars, because of their lighter weight and simpler, more transportable design, are usually an integral part of infantry and, in some armies, armor units. This means they generally do not have to concentrate their fire so their shorter range is not a disadvantage. Some armies also consider infantry operated mortars to be more responsive than artillery, but this is a function of the control arrangements and not the case in all armies. However, mortars have always been used by artillery units and remain with them in many armies, including a few in NATO. In NATO armies artillery is usually assigned a tactical mission that establishes its relationship and responsibilities to the formation or units it is assigned to. It seems that not all NATO nations use the terms and outside NATO others are probably used. The standard terms are: direct support, general support, general support reinforcing and reinforcing. These tactical missions are in the context of the command authority: operational command, operational control, tactical command or tactical control. In NATO direct support generally means that the directly supporting artillery unit provides observers and liaison to the manoeuvre troops being supported, typically an artillery battalion or equivalent is assigned to a brigade and its batteries to the brigade's battalions. However, some armies achieve this by placing the assigned artillery units under command of the directly supported formation. Nevertheless, the batteries' fire can be concentrated onto a single target, as can the fire of units in range and with the other tactical missions. Application of fire There are several dimensions to this subject. The first is the notion that fire may be against an opportunity target or may be arranged. If it is the latter it may be either on-call or scheduled. Arranged targets may be part of a fire plan. Fire may be either observed or unobserved, if the former it may be adjusted, if the latter then it has to be predicted. Observation of adjusted fire may be directly by a forward observer or indirectly via some other target acquisition system. NATO also recognises several different types of fire support for tactical purposes: Counterbattery fire: delivered for the purpose of destroying or neutralizing the enemy's fire support system. Counterpreparation fire: intensive prearranged fire delivered when the imminence of the enemy attack is discovered. Covering fire: used to protect troops when they are within range of enemy small arms. Defensive fire: delivered by supporting units to assist and protect a unit engaged in a defensive action. Final Protective Fire: an immediately available prearranged barrier of fire designed to impede enemy movement across defensive lines or areas. Harassing fire: a random number of shells are fired at random intervals, without any pattern to it that the enemy can predict. This process is designed to hinder enemy forces' movement, and, by the constantly imposed stress, threat of losses and inability of enemy forces to relax or sleep, lowers their morale. Interdiction fire: placed on an area or point to prevent the enemy from using the area or point. Preparation fire: delivered before an attack to weaken the enemy position. These purposes have existed for most of the 20th century, although their definitions have evolved and will continue to do so, lack of suppression in counterbattery is an omission. Broadly they can be defined as either: Deep supporting fire: directed at objectives not in the immediate vicinity of own force, for neutralizing or destroying enemy reserves and weapons, and interfering with enemy command, supply, communications and observation; or Close supporting fire: placed on enemy troops, weapons or positions which, because of their proximity present the most immediate and serious threat to the supported unit. Two other NATO terms also need definition: Neutralization fire: delivered to render a target temporarily ineffective or unusable; and Suppression fire: that degrades the performance of a target below the level needed to fulfill its mission. Suppression is usually only effective for the duration of the fire. The tactical purposes also include various "mission verbs", a rapidly expanding subject with the modern concept of "effects based operations".Targeting is the process of selecting target and matching the appropriate response to them taking account of operational requirements and capabilities. It requires consideration of the type of fire support required and the extent of coordination with the supported arm. It involves decisions about: what effects are required, for example, neutralization or suppression; the proximity of and risks to own troops or non-combatants; what types of munitions, including their fuzing, are to be used and in what quantities; when the targets should be attacked and possibly for how long; what methods should be used, for example, converged or distributed, whether adjustment is permissible or surprise essential, the need for special procedures such as precision or danger close how many fire units are needed and which ones they should be from those that are available (in range, with the required munitions type and quantity, not allotted to another target, have the most suitable line of fire if there is a risk to own troops or non-combatants); The targeting process is the key aspect of tactical fire control. Depending on the circumstances and national procedures it may all be undertaken in one place or may be distributed. In armies practicing control from the front, most of the process may be undertaken by a forward observer or other target acquirer. This is particularly the case for a smaller target requiring only a few fire units. The extent to which the process is formal or informal and makes use of computer based systems, documented norms or experience and judgement also varies widely armies and other circumstances. Surprise may be essential or irrelevant. It depends on what effects are required and whether or not the target is likely to move or quickly improve its protective posture. During World War II UK researchers concluded that for impact fuzed munitions the relative risk were as follows: men standing – 1 men lying – 1/3 men firing from trenches – 1/15–1/50 men crouching in trenches – 1/25–1/100 Airburst munitions significantly increase the relative risk for lying men, etc. Historically most casualties occur in the first 10–15 seconds of fire, i.e. the time needed to react and improve protective posture, however, this is less relevant if airburst is used. There are several ways of making best use of this brief window of maximum vulnerability: ordering the guns to fire together, either by executive order or by a "fire at" time. The disadvantage is that if the fire is concentrated from many dispersed fire units then there will be different times of flight and the first rounds will be spread in time. To some extent a large concentration offsets the problem because it may mean that only one round is required from each gun and most of these could arrive in the 15 second window. burst fire, a rate of fire to deliver three rounds from each gun within 10 or 15 seconds, this reduces the number of guns and hence fire units needed, which means they may be less dispersed and have less variation in their times of flight. Smaller caliber guns, such as 105 mm, have always been able to deliver three rounds in 15 seconds, larger calibers firing fixed rounds could also do it but it was not until the 1970s that a multi-charge 155 mm howitzer, FH-70 first gained the capability. multiple round simultaneous impact (MRSI), where a single weapon or multiple individual weapons fire multiple rounds at differing trajectories so that all rounds arrive on target at the same time. time on target'', fire units fire at the time less their time of flight, this works well with prearranged scheduled fire but is less satisfactory for opportunity targets because it means delaying the delivery of fire by selecting a 'safe' time that all or most fire units can achieve. It can be used with both the previous two methods. Counter-battery fire Modern counter-battery fire developed in World War I, with the objective of defeating the enemy's artillery. Typically such fire was used to suppress enemy batteries when they were or were about to interfere with the activities of friendly forces (such as to prevent enemy defensive artillery fire against an impending attack) or to systematically destroy enemy guns. In World War I the latter required air observation. The first indirect counter-battery fire was in May 1900 by an observer in a balloon. Enemy artillery can be detected in two ways, either by direct observation of the guns from the air or by ground observers (including specialist reconnaissance), or from their firing signatures. This includes radars tracking the shells in flight to determine their place of origin, sound ranging detecting guns firing and resecting their position from pairs of microphones or cross-observation of gun flashes using observation by human observers or opto-electronic devices, although the widespread adoption of 'flashless' propellant limited the effectiveness of the latter. Once hostile batteries have been detected they may be engaged immediately by friendly artillery or later at an optimum time, depending on the tactical situation and the counter-battery policy. Air strike is another option. In some situations the task is to locate all active enemy batteries for attack using a counter-battery fire at the appropriate moment in accordance with a plan developed by artillery intelligence staff. In other situations counter-battery fire may occur whenever a battery is located with sufficient accuracy. Modern counter-battery target acquisition uses unmanned aircraft, counter-battery radar, ground reconnaissance and sound-ranging. Counter-battery fire may be adjusted by some of the systems, for example the operator of an unmanned aircraft can 'follow' a battery if it moves. Defensive measures by batteries include frequently changing position or constructing defensive earthworks, the tunnels used by North Korea being an extreme example. Counter-measures include air defence against aircraft and attacking counter-battery radars physically and electronically. Field artillery team 'Field Artillery Team' is a US term and the following description and terminology applies to the US, other armies are broadly similar but differ in significant details. Modern field artillery (post–World War I) has three distinct parts: the Forward Observer (FO), the Fire Direction Center (FDC) and the actual guns themselves. The forward observer observes the target using tools such as binoculars, laser rangefinders, designators and call back fire missions on his radio, or relays the data through a portable computer via an encrypted digital radio connection protected from jamming by computerized frequency hopping. A lesser known part of the team is the FAS or Field Artillery Survey team which sets up the "Gun Line" for the cannons. Today most artillery battalions use a(n) "Aiming Circle" which allows for faster setup and more mobility. FAS teams are still used for checks and balances purposes and if a gun battery has issues with the "Aiming Circle" a FAS team will do it for them. The FO can communicate directly with the battery FDC, of which there is one per each battery of 4–8 guns. Otherwise the several FOs communicate with a higher FDC such as at a Battalion level, and the higher FDC prioritizes the targets and allocates fires to individual batteries as needed to engage the targets that are spotted by the FOs or to perform preplanned fires. The Battery FDC computes firing data—ammunition to be used, powder charge, fuse settings, the direction to the target, and the quadrant elevation to be fired at to reach the target, what gun will fire any rounds needed for adjusting on the target, and the number of rounds to be fired on the target by each gun once the target has been accurately located—to the guns. Traditionally this data is relayed via radio or wire communications as a warning order to the guns, followed by orders specifying the type of ammunition and fuse setting, direction, and the elevation needed to reach the target, and the method of adjustment or orders for fire for effect (FFE). However, in more advanced artillery units, this data is relayed through a digital radio link. Other parts of the field artillery team include meteorological analysis to determine the temperature, humidity and pressure of the air and wind direction and speed at different altitudes. Also radar is used both for determining the location of enemy artillery and mortar batteries and to determine the precise actual strike points of rounds fired by battery and comparing that location with what was expected to compute a registration allowing future rounds to be fired with much greater accuracy. Time on target A technique called time on target (TOT) was developed by the British Army in North Africa at the end of 1941 and early 1942 particularly for counter-battery fire and other concentrations, it proved very popular. It relied on BBC time signals to enable officers to synchronize their watches to the second because this avoided the need to use military radio networks and the possibility of losing surprise, and the need for field telephone networks in the desert. With this technique the time of flight from each fire unit (battery or troop) to the target is taken from the range or firing tables, or the computer and each engaging fire unit subtracts its time of flight from the TOT to determine the time to fire. An executive order to fire is given to all guns in the fire unit at the correct moment to fire. When each fire unit fires their rounds at their individual firing time all the opening rounds will reach the target area almost simultaneously. This is especially effective when combined with techniques that allow fires for effect to be made without preliminary adjusting fires. Multiple round simultaneous impact Multiple round simultaneous impact (MRSI) is a modern version of the earlier time on target concept. MRSI is when a single gun fires multiple shells so all arrive at the same target simultaneously. This is possible because there is more than one trajectory for a round to fly to any given target. Typically one is below 45 degrees from horizontal and the other is above it, and by using different sized propellant charges with each shell, it is possible to utilize more than two trajectories. Because the higher trajectories cause the shells to arc higher into the air, they take longer to reach the target. If shells are fired on higher trajectories for initial volleys (starting with the shell with the most propellant and working down) and later volleys are fired on the lower trajectories, with the correct timing the shells will all arrive at the same target simultaneously. This is useful because many more shells can land on the target with no warning. With traditional methods of firing, the target area may have time (however long it takes to reload and re-fire the guns) to take cover between volleys. However, guns capable of burst fire can deliver multiple rounds in a few seconds if they use the same firing data for each, and if guns in more than one location are firing on one target they can use Time on Target procedures so that all their shells arrive at the same time and target. MRSI has a few prerequisites. The first is guns with a high rate of fire. The second is the ability to use different sized propellant charges. Third is a fire control computer that has the ability to compute MRSI volleys and the capability to produce firing data, sent to each gun, and then presented to the gun commander in the correct order. The number of rounds that can be delivered in MRSI depends primarily on the range to the target and the rate of fire. To allow the most shells to reach the target, the target has to be in range of the lowest propellant charge. Examples of guns with a rate of fire that makes them suitable for MRSI includes UK's AS-90, South Africa's Denel G6-52 (which can land six rounds simultaneously at targets at least away), Germany's Panzerhaubitze 2000 (which can land five rounds simultaneously at targets at least away), Slovakia's 155 mm SpGH ZUZANA model 2000, and K9 Thunder. The Archer project (developed by BAE-Systems Bofors in Sweden) is a 155 mm howitzer on a wheeled chassis which is claimed to be able to deliver up to six shells on target simultaneously from the same gun. The 120 mm twin barrel AMOS mortar system, joint developed by Hägglunds (Sweden) and Patria (Finland), is capable of 7 + 7 shells MRSI. The United States Crusader program (now cancelled) was slated to have MRSI capability. It is unclear how many fire control computers have the necessary capabilities. Two-round MRSI firings were a popular artillery demonstration in the 1960s, where well trained detachments could show off their skills for spectators. Air burst The destructiveness of artillery bombardments can be enhanced when some or all of the shells are set for airburst, meaning that they explode in the air above the target instead of upon impact. This can be accomplished either through time fuzes or proximity fuzes. Time fuzes use a precise timer to detonate the shell after a preset delay. This technique is tricky and slight variations in the functioning of the fuze can cause it to explode too high and be ineffective, or to strike the ground instead of exploding above it. Since December 1944 (Battle of the Bulge), proximity fuzed artillery shells have been available that take the guesswork out of this process. These employ a miniature, low powered radar transmitter in the fuze to detect the ground and explode them at a predetermined height above it. The return of the weak radar signal completes an electrical circuit in the fuze which explodes the shell. The proximity fuze itself was developed by the British to increase the effectiveness of anti-aircraft warfare. This is a very effective tactic against infantry and light vehicles, because it scatters the fragmentation of the shell over a larger area and prevents it from being blocked by terrain or entrenchments that do not include some form of robust overhead cover. Combined with TOT or MRSI tactics that give no warning of the incoming rounds, these rounds are especially devastating because many enemy soldiers are likely to be caught in the open; even more so if the attack is launched against an assembly area or troops moving in the open rather than a unit in an entrenched tactical position. Use in monuments Numerous war memorials around the world incorporate an artillery piece that was used in the war or battle commemorated. See also List of artillery Advanced Gun System Artillery museums Barrage (artillery) Beehive anti-personnel round Coilgun Combustion light-gas gun Cordite Fuze Gun laying Light-gas gun Paris Gun Railgun Shoot-and-scoot Shrapnel shell Suppressive fire Improvised artillery in the Syrian Civil War References Notes Bibliography Further reading External links Naval Weapons of the World Cannon Artillery – The Voice of Freedom's Thunder Modern Artillery What sort of forensic information can be derived from the analysis of shell fragments Evans, Nigel F. (2001–2007) "British Artillery in World War 2" Artillery Tactics and Combat during the Napoleonic Wars Artillery of Napoleon's Imperial Guard French artillery and its ammunition. 14th to the end of the 19th century Historic films showing artillery in World War I at europeanfilmgateway.eu Video: Inside shrieking shrapnel. Hear the great sound of shrapnel's – Finnish field artillery fire video year 2013 Video: Forensic and archaeological interpretation of artillery shell fragments and shrapnel Chinese inventions Explosive weapons
2511
https://en.wikipedia.org/wiki/Alexanderplatz
Alexanderplatz
() () is a large public square and transport hub in the central Mitte district of Berlin. The square is named after the Russian Tsar Alexander I, which also denotes the larger neighbourhood stretching from in the north-east to and the in the south-west. is reputedly the most visited area of Berlin, beating Friedrichstrasse and City West. It is a popular starting point for tourists, with many attractions including the (TV tower), the Nikolai Quarter and the ('Red City Hall') situated nearby. is still one of Berlin's major commercial areas, housing various shopping malls, department stores and other large retail locations. History Early history to the 18th century A hospital stood at the location of present-day since the 13th century. Named (St. George), the hospital gave its name to the nearby (George Gate) of the Berlin city wall. Outside the city walls, this area was largely undeveloped until around 1400, when the first settlers began building thatched cottages. As a gallows was located close by, the area earned the nickname the ('Devil's Pleasure Garden'). The George Gate became the most important of Berlin's city gates during the 16th century, being the main entry point for goods arriving along the roads to the north and north-east of the city, for example from , and , and the big Hanseatic cities on the Baltic Sea. After the Thirty Years' War, the city wall was strengthened. From 1658 to 1683, a citywide fortress was constructed to plans by the Linz master builder, . The new fortress contained 13 bastions connected by ramparts and was preceded by a moat measuring up to wide. Within the new fortress, many of the historic city wall gates were closed. For example, the southeastern Gate was closed but the Georgian Gate remained open, making the Georgian Gate an even more important entrance to the city. In 1681, the trade of cattle and pig fattening was banned within the city. Frederick William, the Great Elector, granted cheaper plots of land, waiving the basic interest rate, in the area in front of the Georgian Gate. Settlements grew rapidly and a weekly cattle market was established on the square in front of the Gate. The area developed into a suburb – the – which continued to flourish into the late 17th century. Unlike the southwestern suburbs (, ) which were strictly and geometrically planned, the suburbs in the northeast (, and the ) proliferated without plan. Despite a building ban imposed in 1691, more than 600 houses existed in the area by 1700. At that time, the George Gate was a rectangular gatehouse with a tower. Next to the tower stood a remaining tower from the original medieval city walls. The upper floors of the gatehouse served as the city jail. A drawbridge spanned the moat and the gate was locked at nightfall by the garrison using heavy oak planks. A highway ran through the cattle market to the northeast towards . To the right stood the George chapel, an orphanage and a hospital that was donated by the Elector Sophie Dorothea in 1672. Next to the chapel stood a dilapidated medieval plague house which was demolished in 1716. Behind it was a rifleman's field and an inn, later named the . By the end of the 17th century, 600 to 700 families lived in this area. They included butchers, cattle herders, shepherds and dairy farmers. The George chapel was upgraded to the George church and received its own preacher. (1701–1805) After his coronation in on 6 May 1701 the Prussian King Frederick I entered Berlin through the George Gate. This led to the gate being renamed the King's Gate, and the surrounding area became known in official documents as (King's Gate Square). The suburb was renamed (or 'royal suburbs' short). In 1734, the Berlin Customs Wall, which initially consisted of a ring of palisade fences, was reinforced and grew to encompass the old city and its suburbs, including . This resulted in the King's Gate losing importance as an entry point for goods into the city. The gate was finally demolished in 1746. By the end of the 18th century, the basic structure of the royal suburbs of the had been developed. It consisted of irregular-shaped blocks of buildings running along the historic highways which once carried goods in various directions out of the gate. At this time, the area contained large factories (silk and wool), such as the (one of Berlin's first cloth factories, located in a former barn) and a workhouse established in 1758 for beggars and homeless people, where the inmates worked a man-powered treadmill to turn a mill. Soon, military facilities came to dominate the area, such as the 1799–1800 military parade grounds designed by David Gilly. At this time, the residents of the were mostly craftsmen, petty-bourgeois, retired soldiers and manufacturing workers. The southern part of the later was separated from traffic by trees and served as a parade ground, whereas the northern half remained a market. Beginning in the mid-18th century, the most important wool market in Germany was held in . Between 1752 and 1755, the writer lived in a house on Alexanderplatz. In 1771, a new stone bridge (the ) was built over the moat and in 1777 a colonnade-lined row of shops () was constructed by architect . Between 1783 and 1784, seven three-storey buildings were erected around the square by , including the famous , where lived as a permanent tenant and stayed in the days before his suicide. (1805–1900) On 25 October 1805 the Russian Tsar Alexander I was welcomed to the city on the parade grounds in front of the old King's Gate. To mark this occasion, on 2 November, King Frederick William III ordered the square to be renamed : In the southeast of the square, the cloth factory buildings were converted into the Theater by at a cost of 120,000 Taler. The foundation stone was laid on 31 August 1823 and the opening ceremony occurred on 4 August 1824. Sales were poor, forcing the theatre to close on 3 June 1851. Thereafter, the building was used for wool storage, then as a tenement building, and finally as an inn called until the building's demolition in 1932. During these years, was populated by fish wives, water carriers, sand sellers, rag-and-bone men, knife sharpeners and day laborers. Because of its importance as a transport hub, horse-drawn buses ran every 15 minutes between and in 1847. During the March Revolution of 1848, large-scale street fighting occurred on the streets of , where revolutionaries used barricades to block the route from to the city. Novelist and poet , who worked in the vicinity in a nearby pharmacy, participated in the construction of barricades and later described how he used materials from the Theater to barricade . The continued to grow throughout the 19th century, with three-storey developments already existing at the beginning of the century and fourth storeys being constructed from the middle of the century. By the end of the century, most of the buildings were already five storeys high. The large factories and military facilities gave way to housing developments (mainly rental housing for the factory workers who had just moved into the city) and trading houses. At the beginning of the 1870s, the Berlin administration had the former moat filled to build the Berlin city railway, which was opened in 1882 along with (' Railway Station'). In 1883–1884, the Grand Hotel, a neo-Renaissance building with 185 rooms and shops beneath was constructed. From 1886 to 1890, built the police headquarters, a huge brick building whose tower on the northern corner dominated the building. In 1890, a district court at was also established. In 1886, the local authorities built a central market hall west of the rail tracks, which replaced the weekly market on the in 1896. During the end of the 19th century, the emerging private traffic and the first horse bus lines dominated the northern part of the square, the southern part (the former parade ground) remained quiet, having green space elements added by garden director in 1889. The northwest of the square contained a second, smaller green space where, in 1895, the copper Berolina statue by sculptor was erected. Between Empire and the Nazi era (1900–1940) At the beginning of the 20th century, experienced its heyday. In 1901, founded the first German cabaret, the , in the former ('Secession stage') at , initially under the name . It was announced as " as upscale entertainment with artistic ambitions. Emperor-loyal and market-oriented stands the uncritical amusement in the foreground." The merchants , and opened large department stores on : (1904–1911), (1910–1911) and (1911). marketed itself as a department store for the Berlin people, whereas modelled itself as a department store for the world. In October 1905, the first section of the department store opened to the public. It was designed by architects and , who had already won second prize in the competition for the construction of the building. The department store underwent further construction phases and, in 1911, had a commercial space of and the longest department store façade in the world at in length. For the construction of the department store, by architects and , the were removed in 1910 and now stand in the Park in . In October 1908, the ('house of teachers') was opened next to the at . It was designed by and Henry Gross. The building belonged to the ('teachers’ association'), who rented space on the ground floor of the building out to a pastry shop and restaurant to raise funds for the association. The building housed the teachers' library which survived two world wars, and today is integrated into the library for educational historical research. The rear of the property contained the association's administrative building, a hotel for members and an exhibition hall. Notable events that took place in the hall include the funeral services for and on 2 February 1919 and, on 4 December 1920, the (Unification Party Congress) of the Communist Party and the USPD. The First Ordinary Congress of the Communist Workers Party of Germany was held in the nearby restaurant, 1–4 August 1920. 's position as a main transport and traffic hub continued to fuel its development. In addition to the three underground lines, long-distance trains and trains ran along the 's viaduct arches. Omnibuses, horse-drawn from 1877 and, after 1898, also electric-powered trams, ran out of in all directions in a star shape. The subway station was designed by Alfred Grenander and followed the colour-coded order of subway stations, which began with green at and ran through to dark red. In the Golden Twenties, was the epitome of the lively, pulsating cosmopolitan city of Berlin, rivalled in the city only by . Many of the buildings and rail bridges surrounding the platz bore large billboards that illuminated the night. The Berlin cigarette company Manoli had a famous billboard at the time which contained a ring of neon tubes that constantly circled a black ball. The proverbial "" of those years was characterized as "". Writer wrote a poem referencing the advert, and the composer Rudolf Nelson made the legendary with the dancer Lucie Berber. The writer named his novel, , after the square, and filmed parts of his 1927 film (Berlin: The Symphony of the Big City) at . Destruction of (1940–1945) One of Berlin's largest air-raid shelters during the Second World War was situated under . It was built between 1941 and 1943 for the by . The war reached in early April 1945. The Berolina statue had already been removed in 1944 and probably melted down for use in arms production. During the Battle of Berlin, Red Army artillery bombarded the area around . The battles of the last days of the war destroyed considerable parts of the historic , as well as many of the buildings around . The had entrenched itself within the tunnels of the underground system. Hours before fighting ended in Berlin on 2 May 1945, troops of the SS detonated explosives inside the north–south tunnel under the Canal to slow the advance of the Red Army towards Berlin's city centre. The entire tunnel flooded, as well as large sections of the network via connecting passages at the underground station. Many of those seeking shelter in the tunnels were killed. Of the then of subway tunnel, around were flooded with more than one million cubic meters () of water. Demolition and reconstruction (1945–1964) Before a planned reconstruction of the entire could take place, all the war ruins needed to be demolished and cleared away. A popular black market emerged within the ruined area, which the police raided several times a day. One structure demolished after World War II was the 'Rote Burg', a red brick building with round arches, previously used as police and Gestapo headquarters. The huge construction project began in 1886 and was completed in 1890; it was one of Berlin's largest buildings. The 'castle' suffered extensive damage during 1944-45 and was demolished in 1957. The site on the southwest corner of Alexanderplatz remained largely unused as a carpark until the Alexa shopping centre opened in 2007. Reconstruction planning for post-war Berlin gave priority to the dedicated space to accommodate the rapidly growing motor traffic in inner-city thoroughfares. This idea of a traffic-orientated city was already based on considerations and plans by and from the 1930s. East Germany has been subject to redevelopment several times in its history, most recently during the 1960s, when it was turned into a pedestrian zone and enlarged as part of the German Democratic Republic's redevelopment of the city centre. It is surrounded by several notable structures including the (TV Tower). During the Peaceful Revolution of 1989, the demonstration on 4 November 1989 was the largest demonstration in the history of the German Democratic Republic. Protests starting 15 October and peaked on 4 November with an estimated 200,000 participants who called on the government of the ruling Socialist Unity Party of Germany to step down and demanded a free press, the opening of the borders and their right to travel. Speakers were , , , , , and . The protests continued and culminated in the unexpected Fall of the Berlin Wall on 9 November 1989. After German reunification (1989) Ever since German reunification, has undergone a gradual process of change with many of the surrounding buildings being renovated. After the political turnaround in the wake of the fall of the Berlin Wall, socialist urban planning and architecture of the 1970s no longer corresponded to the current ideas of an inner-city square. Investors demanded planning security for their construction projects. After initial discussions with the public, the goal quickly arose to reinstate 's tram network for better connections to surrounding city quarters. In 1993, an urban planning ideas competition for architects took place to redesign the square and its surrounding area. In the first phase, there were 16 submissions, five of which were selected for the second phase of the competition. These five architects had to adapt their plans to detailed requirements. For example, the return of the Alex's trams was planned, with the implementation to be made in several stages. The winner, who was determined on 17 September 1993, was the Berlin architect . 's plan was based on Behrens’ design, provided a horseshoe-shaped area of seven- to eight-storey buildings and high towers with 42 floors. The and the – both listed buildings – would form the southwestern boundary. Second place went to the design by and . The proposal of the architecture firm Kny & Weber, which was strongly based on the horseshoe shape of Wagner, finally won the third place. The design by was chosen on 7 June 1994 by the Berlin Senate as a basis for the further transformation of . In 1993, architect 's master plan for a major redevelopment including the construction of several skyscrapers was published. In 1995, completed the renovation of the . In 1998, the first tram returned to , and in 1999, the town planning contracts for the implementation of and 's plans were signed by the landowners and the investors. 21st century On 2 April 2000, the Senate finally fixed the development plan for . The purchase contracts between investors and the Senate Department for Urban Development were signed on 23 May 2002, thus laying the foundations for the development. The CUBIX multiplex cinema (CineStar Cubix am Alexanderplatz, styled CUBIX), which opened in November 2000, joined the team of Berlin International Film Festival cinemas in 2007, and the festival shows films on three of its screens. Renovation of the department store began in 2004, led by Berlin professor of architecture, and his son . The building was enlarged by about and has since operated under the name . Beginning with the reconstruction of the department store in 2004, and the biggest underground railway station of Berlin, some buildings were redesigned and new structures built on the square's south-eastern side. Sidewalks were expanded to shrink one of the avenues, a new underground garage was built, and commuter tunnels meant to keep pedestrians off the streets were removed. Between 2005 and 2006, was renovated and later became a branch of the clothing chain, C&A. In 2005, the began work to extend the tram line from to (Alex II). This route was originally to be opened in 2000 but was postponed several times. After further delays caused by the 2006 FIFA World Cup, the route opened on 30 May 2007. In February 2006, the redesign of the walk-in plaza began. The redevelopment plans were provided by the architecture firm Gerkan, Marg and Partners and the Hamburg-based company . The final plans emerged from a design competition launched by the state of Berlin in 2004. However, the paving work was temporarily interrupted a few months after the start of construction by the 2006 FIFA World Cup and all excavation pits had to be provisionally asphalted over. The construction work could only be completed at the end of 2007. The renovation of , the largest Berlin underground station, had been ongoing since the mid-1990s and was finally completed in October 2008. The was given a pavement of yellow granite, bordered by grey mosaic paving around the buildings. Wall AG modernized the 1920s-era underground toilets at a cost of 750,000 euros. The total redesign cost amounted to around 8.7 million euros. On 12 September 2007 the Alexa shopping centre opened. It is located in the immediate vicinity of the , on the site of the old Berlin police headquarters. With a sales area, it is one of the largest shopping centres in Berlin. In May 2007, the Texas property development company Hines began building a six-story commercial building named . The building was built on a plot of , which, according to the plans, closes the square to the east and thus reduces the area of the Platz. The building was opened on 25 March 2009. At the beginning of 2007, the construction company created an underground garage with three levels below the , located between the hotel tower and the building, which cost 25 million euros to build and provides space for around 700 cars. The opening took place on 26 November 2010. At the same time, the Senate narrowed from almost wide to wide (), thus reducing it to three lanes in each direction. Behind the station, next to the CUBIX cinema in the immediate vicinity of the TV tower, the high residential and commercial building, Alea 101, was built between 2012 and 2014. As of 2014 it as assessed that due to a lack of demand the skyscrapers planned in 1993 were unlikely to be constructed. In January 2014, a 39-story residential tower designed by Frank Gehry was announced, but this project was put on hold in 2018. The area is the largest area for crime in Berlin. As of October 2017, was classified a ("crime-contaminated location") by the (General Safety and Planning Laws). Today and future plans Despite the reconstruction of the tram line crossing, it has retained its socialist character, including the much-graffitied , a popular venue. is reputedly the most visited area of Berlin, beating Friedrichstrasse and City West. It is a popular starting point for tourists, with many attractions including the (TV tower), the Nikolai Quarter and the ('Red City Hall') situated nearby. is still one of Berlin's major commercial areas, housing various shopping malls, department stores and other large retail locations. Many historic buildings are located in the vicinity of . The traditional seat of city government, the , or 'Red City Hall', is located nearby, as was the former East German parliament building, the . The was demolished from 2006–2008 to make room for a full reconstruction of the Baroque Berlin Palace, or , which is set to open in 2019. is also the name of the S-Bahn and U-Bahn stations there. It is one of Berlin's largest and most important transportation hubs, being a meeting place of three subway () lines, three lines, and many tram and bus lines, as well as regional trains. It also accommodates the Park Inn Berlin and the World Time Clock, a continually rotating installation that shows the time throughout the globe, the House of Travel, and 's (House of Teachers)'. Long-term plans exist for the demolition of the high former (now the Hotel Park-Inn), with the site to be replaced by three skyscrapers. If and when this plan will be implemented is unclear, especially since the hotel tower received a new façade as recently as in 2005, and the occupancy rates of the hotel are very good. However, the plans could give way in the next few years to a suggested high new block conversion. The previous main tenant of the development, Saturn, moved into the building in March 2009. In 2014, Primark opened a branch inside the hotel building. The majority of the planned high skyscrapers will probably never be built. The state of Berlin has announced that it will not enforce the corresponding urban development contracts against the market. Of the 13 planned skyscrapers, 10 remained as of 2008, after modifications to the plans – eight of which had construction rights. Some investors in the Alexa shopping centre announced several times since 2007 that they would sell their respective shares in the plot to an investor interested in building a high-rise building. The first concrete plans for the construction of a high-rise were made by Hines, the investor behind die mitte. In 2009, the construction of a high tower to be built behind die mitte was announced. On 12 September 2011, a slightly modified development plan was presented, which provided for a residential tower housing 400 apartments. In early 2013, the development plan was opened to the public. In autumn 2015, the Berlin Senate organized two forums in which interested citizens could express their opinions on the proposed changes to the . Architects, city planners and Senate officials held open discussions. On that occasion, however, it was reiterated that the plans for high-rise developments were not up for debate. According to the master plan of the architect , up to eleven huge buildings will continue to be built, which will house a mixture of shops and apartments. At the beginning of March 2018, it was announced that the district office had granted planning permission for the first residential high-rise in , the high Alexander Tower. On 29 of the 35 floors, 377 apartments are to be built. It would be located next to the Alexa shopping centre, with a planned completion date of 2021. Roads and public transport During the post-war reconstruction of the 1960s, was completely pedestrianized. Since then, trams were reintroduced to the area in 1998. station provides connections, access to the U2, U5 and U8 subway lines, regional train lines for DB Regio and ODEG services and, on weekends, the (HBX). Several tram and bus lines also service the area. The following main roads connect to : Northwest: (federal highways B 2 and B 5) Northeast: (B 2 and B 5) Southeast: (B 1) Southwest (in front of the station, in the pedestrian zone): Several arterial roads lead radially from to the outskirts of Berlin. These include (clockwise from north to south-east): / – – (to Bundesstraße 96a) – intersection – (main road 109 to the triangle at the ) / – (B 2) – (intersection ) – (B 2 via to the junction at ) (B 1 and B 5) – – / – (B 1 and B 5 to junction at ) Structures World Clock Berolina Fountain of Friendship The Fountain of Friendship () was erected in 1970 during the redesign of and inaugurated on October 7. It was created by and his group of artists. Its water basin has a diameter of 23 meters, it is 6.20 meters high and is built from embossed copper, glass, ceramics and enamel. The water spurts from the highest point and then flows down in spirals over 17 shells, which each have a diameter between one and four meters. After German reunification, it was completely renovated in a metal art workshop during the reconstruction of the . Other Apart from , is the only existing square in front of one of the medieval gates of Berlin's city wall. Image gallery Further reading Weszkalnys, Gisa (2010). Berlin, Alexanderplatz: Transforming Place in a Unified Germany. Berghahn Books. Alexanderplatz: Plenty of Space for Free Speech. In: Sites of Unity (Haus der Geschichte), 2022. External links Alexanderplatz – Overview of the changes References Buildings and structures completed in the 13th century 13th-century establishments in the Holy Roman Empire Articles containing video clips Mitte Squares in Berlin Zones of Berlin Cremer & Wolffenstein Alexander I of Russia Frederick William III of Prussia
2524
https://en.wikipedia.org/wiki/Airbus%20A300
Airbus A300
The Airbus A300 is Airbus' first production aircraft and the world's first twin-engine, double-aisle (wide-body) airliner, developed and manufactured by Airbus from 1971–2007. In September 1967, aircraft manufacturers in the United Kingdom, France, and West Germany signed an initial memorandum of understanding to collaborate to develop an innovative large airliner. West Germany and France reached a firm agreement on 29 May 1969, after the British withdrew from the project on 10 April 1969. The pan-European collaborative aerospace manufacturer Airbus Industrie was formally created on 18 December 1970 to develop and produce it. The A300 prototype first flew on 28 October 1972. The first twin-engine widebody airliner, the A300 typically seats 247 passengers in two classes over a range of 5,375 to 7,500 km (2,900 to 4,050 nmi; ). Initial variants are powered by General Electric CF6-50 or Pratt & Whitney JT9D turbofans and have a three-crew flight deck. The improved A300-600 has a two-crew cockpit and updated CF6-80C2 or PW4000 engines; it made its first flight on 8 July 1983 and entered service later that year. The A300 is the basis of the smaller A310 (first flown in 1982) and was adapted in a freighter version. Its cross section was retained for the larger four-engined A340 (1991) and the larger twin-engined A330 (1992). It is also the basis for the oversize Beluga transport (1994). Unlike most Airbus products, it has a yoke, not using a fly-by-wire system. Launch customer Air France introduced the type on 23 May 1974. After limited demand initially, sales took off as the type was proven in early service, beginning three decades of steady orders. It has a similar capacity to the Boeing 767-300, introduced in 1986, but lacked the 767-300ER range. During the 1990s, the A300 became popular with cargo aircraft operators, as both passenger airliner conversions and as original builds. Production ceased in July 2007 after 561 deliveries. , there are 197 A300 family aircraft still in commercial service. Development Origins During the 1960s, European aircraft manufacturers such as Hawker Siddeley and the British Aircraft Corporation, based in the UK, and Sud Aviation of France, had ambitions to build a new 200-seat airliner for the growing civil aviation market. While studies were performed and considered, such as a stretched twin-engine variant of the Hawker Siddeley Trident and an expanded development of the British Aircraft Corporation (BAC) One-Eleven, designated the BAC Two-Eleven, it was recognized that if each of the European manufacturers were to launch similar aircraft into the market at the same time, neither would achieve sales volume needed to make them viable. In 1965, a British government study, known as the Plowden Report, had found British aircraft production costs to be between 10% and 20% higher than American counterparts due to shorter production runs, which was in part due to the fractured European market. To overcome this factor, the report recommended the pursuit of multinational collaborative projects between the region's leading aircraft manufacturers. European manufacturers were keen to explore prospective programmes; the proposed 260-seat wide-body HBN 100 between Hawker Siddeley, Nord Aviation, and Breguet Aviation being one such example. National governments were also keen to support such efforts amid a belief that American manufacturers could dominate the European Economic Community; in particular, Germany had ambitions for a multinational airliner project to invigorate its aircraft industry, which had declined considerably following the Second World War. During the mid-1960s, both Air France and American Airlines had expressed interest in a short-haul twin-engine wide-body aircraft, indicating a market demand for such an aircraft to be produced. In July 1967, during a high-profile meeting between French, German, and British ministers, an agreement was made for greater cooperation between European nations in the field of aviation technology, and "for the joint development and production of an airbus". The word airbus at this point was a generic aviation term for a larger commercial aircraft, and was considered acceptable in multiple languages, including French. Shortly after the July 1967 meeting, French engineer Roger Béteille was appointed as the technical director of what would become the A300 programme, while Henri Ziegler, chief operating office of Sud Aviation, was appointed as the general manager of the organisation and German politician Franz Josef Strauss became the chairman of the supervisory board. Béteille drew up an initial work share plan for the project, under which French firms would produce the aircraft's cockpit, the control systems, and lower-centre portion of the fuselage, Hawker Siddeley would manufacture the wings, while German companies would produce the forward, rear and upper part of the center fuselage sections. Additional work included moving elements of the wings being produced in the Netherlands, and Spain producing the horizontal tail plane. An early design goal for the A300 that Béteille had stressed the importance of was the incorporation of a high level of technology, which would serve as a decisive advantage over prospective competitors. As such, the A300 would feature the first use of composite materials of any passenger aircraft, the leading and trailing edges of the tail fin being composed of glass fibre reinforced plastic. Béteille opted for English as the working language for the developing aircraft, as well against using Metric instrumentation and measurements, as most airlines already had US-built aircraft. These decisions were partially influenced by feedback from various airlines, such as Air France and Lufthansa, as an emphasis had been placed on determining the specifics of what kind of aircraft that potential operators were seeking. According to Airbus, this cultural approach to market research had been crucial to the company's long-term success. Workshare and redefinition On 26 September 1967, the British, French, and West German governments signed a Memorandum of Understanding to start development of the 300-seat Airbus A300. At this point, the A300 was only the second major joint aircraft programme in Europe, the first being the Anglo-French Concorde. Under the terms of the memorandum, Britain and France were each to receive a 37.5 per cent work share on the project, while Germany received a 25 per cent share. Sud Aviation was recognized as the lead company for A300, with Hawker Siddeley being selected as the British partner company. At the time, the news of the announcement had been clouded by the British Government's support for the Airbus, which coincided with its refusal to back BAC's proposed competitor, the BAC 2–11, despite a preference for the latter expressed by British European Airways (BEA). Another parameter was the requirement for a new engine to be developed by Rolls-Royce to power the proposed airliner; a derivative of the in-development Rolls-Royce RB211, the triple-spool RB207, capable of producing of . The programme cost was US$4.6 billion (in 1993 Dollars). In December 1968, the French and British partner companies (Sud Aviation and Hawker Siddeley) proposed a revised configuration, the 250-seat Airbus A250. It had been feared that the original 300-seat proposal was too large for the market, thus it had been scaled down to produce the A250. The dimensional changes involved in the shrink reduced the length of the fuselage by and the diameter by , reducing the overall weight by . For increased flexibility, the cabin floor was raised so that standard LD3 freight containers could be accommodated side-by-side, allowing more cargo to be carried. Refinements made by Hawker Siddeley to the wing's design provided for greater lift and overall performance; this gave the aircraft the ability to climb faster and attain a level cruising altitude sooner than any other passenger aircraft. It was later renamed the A300B. Perhaps the most significant change of the A300B was that it would not require new engines to be developed, being of a suitable size to be powered by Rolls-Royce's RB211, or alternatively the American Pratt & Whitney JT9D and General Electric CF6 powerplants; this switch was recognized as considerably reducing the project's development costs. To attract potential customers in the US market, it was decided that General Electric CF6-50 engines would power the A300 in place of the British RB207; these engines would be produced in co-operation with French firm Snecma. By this time, Rolls-Royce had been concentrating their efforts upon developing their RB211 turbofan engine instead and progress on the RB207's development had been slow for some time, the firm having suffered due to funding limitations, both of which had been factors in the engine switch decision. On 10 April 1969, a few months after the decision to drop the RB207 had been announced, the British government announced that they would withdraw from the Airbus venture. In response, West Germany proposed to France that they would be willing to contribute up to 50% of the project's costs if France was prepared to do the same. Additionally, the managing director of Hawker Siddeley, Sir Arnold Alexander Hall, decided that his company would remain in the project as a favoured sub-contractor, developing and manufacturing the wings for the A300, which would later become pivotal in later versions' impressive performance from short domestic to long intercontinental flights. Hawker Siddeley spent £35 million of its own funds, along with a further £35 million loan from the West German government, on the machine tooling to design and produce the wings. Programme launch On 29 May 1969, during the Paris Air Show, French transport minister Jean Chamant and German economics minister Karl Schiller signed an agreement officially launching the Airbus A300, the world's first twin-engine widebody airliner. The intention of the project was to produce an aircraft that was smaller, lighter, and more economical than its three-engine American rivals, the McDonnell Douglas DC-10 and the Lockheed L-1011 TriStar. In order to meet Air France's demands for an aircraft larger than 250-seat A300B, it was decided to stretch the fuselage to create a new variant, designated as the A300B2, which would be offered alongside the original 250-seat A300B, henceforth referred to as the A300B1. On 3 September 1970, Air France signed a letter of intent for six A300s, marking the first order to be won for the new airliner. In the aftermath of the Paris Air Show agreement, it was decided that, in order to provide effective management of responsibilities, a Groupement d'intérêt économique would be established, allowing the various partners to work together on the project while remaining separate business entities. On 18 December 1970, Airbus Industrie was formally established following an agreement between Aérospatiale (the newly merged Sud Aviation and Nord Aviation) of France and the antecedents to Deutsche Aerospace of Germany, each receiving a 50 per cent stake in the newly formed company. In 1971, the consortium was joined by a third full partner, the Spanish firm CASA, who received a 4.2 per cent stake, the other two members reducing their stakes to 47.9 per cent each. In 1979, Britain joined the Airbus consortium via British Aerospace, which Hawker Siddeley had merged into, which acquired a 20 per cent stake in Airbus Industrie with France and Germany each reducing their stakes to 37.9 per cent. Prototype and flight testing Airbus Industrie was initially headquartered in Paris, which is where design, development, flight testing, sales, marketing, and customer support activities were centred; the headquarters was relocated to Toulouse in January 1974. The final assembly line for the A300 was located adjacent to Toulouse Blagnac International Airport. The manufacturing process necessitated transporting each aircraft section being produced by the partner companies scattered across Europe to this one location. The combined use of ferries and roads were used for the assembly of the first A300, however this was time-consuming and not viewed as ideal by Felix Kracht, Airbus Industrie's production director. Kracht's solution was to have the various A300 sections brought to Toulouse by a fleet of Boeing 377-derived Aero Spacelines Super Guppy aircraft, by which means none of the manufacturing sites were more than two hours away. Having the sections airlifted in this manner made the A300 the first airliner to use just-in-time manufacturing techniques, and allowed each company to manufacture its sections as fully equipped, ready-to-fly assemblies. In September 1969, construction of the first prototype A300 began. On 28 September 1972, this first prototype was unveiled to the public, it conducted its maiden flight from Toulouse–Blagnac International Airport on 28 October that year. This maiden flight, which was performed a month ahead of schedule, lasted for one hour and 25 minutes; the captain was Max Fischl and the first officer was Bernard Ziegler, son of Henri Ziegler. In 1972, unit cost was US$17.5M. On 5 February 1973, the second prototype performed its maiden flight. The flight test programme, which involved a total of four aircraft, was relatively problem-free, accumulating 1,580 flight hours throughout. In September 1973, as part of promotional efforts for the A300, the new aircraft was taken on a six-week tour around North America and South America, to demonstrate it to airline executives, pilots, and would-be customers. Amongst the consequences of this expedition, it had allegedly brought the A300 to the attention of Frank Borman of Eastern Airlines, one of the "big four" U.S. airlines. Entry into service On 15 March 1974, type certificates were granted for the A300 from both German and French authorities, clearing the way for its entry into revenue service. On 23 May 1974, Federal Aviation Administration (FAA) certification was received. The first production model, the A300B2, entered service in 1974, followed by the A300B4 one year later. Initially, the success of the consortium was poor, in part due to the economic consequences of the 1973 oil crisis, but by 1979 there were 81 A300 passenger liners in service with 14 airlines, alongside 133 firm orders and 88 options. Ten years after the official launch of the A300, the company had achieved a 26 per cent market share in terms of dollar value, enabling Airbus Industries to proceed with the development of its second aircraft, the Airbus A310. Design The Airbus A300 is a wide-body medium-to-long range airliner; it has the distinction of being the first twin-engine wide-body aircraft in the world. In 1977, the A300 became the first Extended Range Twin Operations (ETOPS)-compliant aircraft, due to its high performance and safety standards. Another world-first of the A300 is the use of composite materials on a commercial aircraft, which were used on both secondary and later primary airframe structures, decreasing overall weight and improving cost-effectiveness. Other firsts included the pioneering use of centre-of-gravity control, achieved by transferring fuel between various locations across the aircraft, and electrically signalled secondary flight controls. The A300 is powered by a pair of underwing turbofan engines, either General Electric CF6 or Pratt & Whitney JT9D engines; the sole use of underwing engine pods allowed for any suitable turbofan engine to be more readily used. The lack of a third tail-mounted engine, as per the trijet configuration used by some competing airliners, allowed for the wings to be located further forwards and to reduce the size of the vertical stabiliser and elevator, which had the effect of increasing the aircraft's flight performance and fuel efficiency. Airbus partners had employed the latest technology, some of which having been derived from Concorde, on the A300. According to Airbus, new technologies adopted for the airliner were selected principally for increased safety, operational capability, and profitability. Upon entry into service in 1974, the A300 was a very advanced plane, which went on to influence later airliner designs. The technological highlights include advanced wings by de Havilland (later BAE Systems) with supercritical airfoil sections for economical performance and advanced aerodynamically efficient flight control surfaces. The diameter circular fuselage section allows an eight-abreast passenger seating and is wide enough for 2 LD3 cargo containers side by side. Structures are made from metal billets, reducing weight. It is the first airliner to be fitted with wind shear protection. Its advanced autopilots are capable of flying the aircraft from climb-out to landing, and it has an electrically controlled braking system. Later A300s incorporated other advanced features such as the Forward-Facing Crew Cockpit (FFCC), which enabled a two-pilot flight crew to fly the aircraft alone without the need for a flight engineer, the functions of which were automated; this two-man cockpit concept was a world-first for a wide-body aircraft. Glass cockpit flight instrumentation, which used cathode ray tube (CRT) monitors to display flight, navigation, and warning information, along with fully digital dual autopilots and digital flight control computers for controlling the spoilers, flaps, and leading-edge slats, were also adopted upon later-built models. Additional composites were also made use of, such as carbon-fibre-reinforced polymer (CFRP), as well as their presence in an increasing proportion of the aircraft's components, including the spoilers, rudder, air brakes, and landing gear doors. Another feature of later aircraft was the addition of wingtip fences, which improved aerodynamic performance and thus reduced cruise fuel consumption by about 1.5% for the A300-600. In addition to passenger duties, the A300 became widely used by air freight operators; according to Airbus, it is the best-selling freight aircraft of all time. Various variants of the A300 were built to meet customer demands, often for diverse roles such as aerial refueling tankers, freighter models (new-build and conversions), combi aircraft, military airlifter, and VIP transport. Perhaps the most visually unique of the variants is the A300-600ST Beluga, an oversized cargo-carrying model operated by Airbus to carry aircraft sections between their manufacturing facilities. The A300 was the basis for, and retained a high level of commonality with, the second airliner produced by Airbus, the smaller Airbus A310. Operational history On 23 May 1974, the first A300 to enter service performed the first commercial flight of the type, flying from Paris to London, for Air France. Immediately after the launch, sales of the A300 were weak for some years, with most orders going to airlines that had an obligation to favor the domestically made product – notably Air France and Lufthansa, the first two airlines to place orders for the type. Following the appointment of Bernard Lathière as Henri Ziegler's replacement, an aggressive sales approach was adopted. Indian Airlines was the world's first domestic airline to purchase the A300, ordering three aircraft with three options. However, between December 1975 and May 1977, there were no sales for the type. During this period a number of "whitetail" A300s – completed but unsold aircraft – were completed and stored at Toulouse, and production fell to half an aircraft per month amid calls to pause production completely. During the flight testing of the A300B2, Airbus held a series of talks with Korean Air on the topic of developing a longer-range version of the A300, which would become the A300B4. In September 1974, Korean Air placed an order for four A300B4s with options for two further aircraft; this sale was viewed as significant as it was the first non-European international airline to order Airbus aircraft. Airbus had viewed South-East Asia as a vital market that was ready to be opened up and believed Korean Air to be the 'key'. Airlines operating the A300 on short-haul routes were forced to reduce frequencies to try and fill the aircraft. As a result, they lost passengers to airlines operating more frequent narrow-body flights. Eventually, Airbus had to build its own narrowbody aircraft (the A320) to compete with the Boeing 737 and McDonnell Douglas DC-9/MD-80. The saviour of the A300 was the advent of ETOPS, a revised FAA rule which allows twin-engine jets to fly long-distance routes that were previously off-limits to them. This enabled Airbus to develop the aircraft as a medium/long-range airliner. In 1977, US carrier Eastern Air Lines leased four A300s as an in-service trial. CEO Frank Borman was impressed that the A300 consumed 30% less fuel, even less than expected, than his fleet of L-1011s. Borman proceeded to order 23 A300s, becoming the first U.S. customer for the type. This order is often cited as the point at which Airbus came to be seen as a serious competitor to the large American aircraft-manufacturers Boeing and McDonnell Douglas. Aviation author John Bowen alleged that various concessions, such as loan guarantees from European governments and compensation payments, were a factor in the decision as well. The Eastern Air Lines breakthrough was shortly followed by an order from Pan Am. From then on, the A300 family sold well, eventually reaching a total of 561 delivered aircraft. In December 1977, Aerocondor Colombia became the first Airbus operator in Latin America, leasing one Airbus A300B4-2C, named Ciudad de Barranquilla. During the late 1970s, Airbus adopted a so-called 'Silk Road' strategy, targeting airlines in the Far East. As a result, The aircraft found particular favor with Asian airlines, being bought by Japan Air System, Korean Air, China Eastern Airlines, Thai Airways International, Singapore Airlines, Malaysia Airlines, Philippine Airlines, Garuda Indonesia, China Airlines, Pakistan International Airlines, Indian Airlines, Trans Australia Airlines and many others. As Asia did not have restrictions similar to the FAA 60-minutes rule for twin-engine airliners which existed at the time, Asian airlines used A300s for routes across the Bay of Bengal and South China Sea. In 1977, the A300B4 became the first ETOPS compliant aircraft, qualifying for Extended Twin Engine Operations over water, providing operators with more versatility in routing. In 1982 Garuda Indonesia became the first airline to fly the A300B4-200FFCC with the newly Forward-Facing Crew Cockpit concept, the world's first wide-body aircraft that only operated by two-man cockpit crew. By 1981, Airbus was growing rapidly, with over 400 aircraft sold to over forty airlines. In 1989, Chinese operator China Eastern Airlines received its first A300; by 2006, the airline operated around 18 A300s, making it the largest operator of both the A300 and the A310 at that time. On 31 May 2014, China Eastern officially retired the last A300-600 in its fleet, having begun drawing down the type in 2010. From 1997 to 2014, a single A300, designated A300 Zero-G, was operated by the European Space Agency (ESA), centre national d'études spatiales (CNES) and the German Aerospace Center (DLR) as a reduced-gravity aircraft for conducting research into microgravity; the A300 is the largest aircraft to ever have been used in this capacity. A typical flight would last for two and a half hours, enabling up to 30 parabolas to be performed per flight. By the 1990s, the A300 was being heavily promoted as a cargo freighter. The largest freight operator of the A300 is FedEx Express, which has 65 A300 aircraft in service as of May 2022. UPS Airlines also operates 52 freighter versions of the A300. The final version was the A300-600R and is rated for 180-minute ETOPS. The A300 has enjoyed renewed interest in the secondhand market for conversion to freighters; large numbers were being converted during the late 1990s. The freighter versions – either new-build A300-600s or converted ex-passenger A300-600s, A300B2s and B4s – account for most of the world's freighter fleet after the Boeing 747 freighter. The A300 provided Airbus the experience of manufacturing and selling airliners competitively. The basic fuselage of the A300 was later stretched (A330 and A340), shortened (A310), or modified into derivatives (A300-600ST Beluga Super Transporter). In 2006, unit cost of an −600F was $105 million. In March 2006, Airbus announced the impending closure of the A300/A310 final assembly line, making them the first Airbus aircraft to be discontinued. The final production A300, an A300F freighter, performed its initial flight on 18 April 2007, and was delivered to FedEx Express on 12 July 2007. Airbus has announced a support package to keep A300s flying commercially. Airbus offers the A330-200F freighter as a replacement for the A300 cargo variants. The life of UPS's fleet of 52 A300s, delivered from 2000 to 2006, will be extended to 2035 by a flight deck upgrade based around Honeywell Primus Epic avionics; new displays and flight management system (FMS), improved weather radar, a central maintenance system, and a new version of the current enhanced ground proximity warning system. With a light usage of only two to three cycles per day, it will not reach the maximum number of cycles by then. The first modification will be made at Airbus Toulouse in 2019 and certified in 2020. As of July 2017, there are 211 A300s in service with 22 operators, with the largest operator being FedEx Express with 68 A300-600F aircraft. Variants A300B1 The A300B1 was the first variant to take flight. It had a maximum takeoff weight (MTOW) of , was long and was powered by two General Electric CF6-50A engines. Only two prototypes of the variant were built before it was adapted into the A300B2, the first production variant of the airliner. The second prototype was leased to Trans European Airways in 1974. A300B2 A300B2-100 Responding to a need for more seats from Air France, Airbus decided that the first production variant should be larger than the original prototype A300B1. The CF6-50A powered A300B2-100 was longer than the A300B1 and had an increased MTOW of , allowing for 30 additional seats and bringing the typical passenger count up to 281, with capacity for 20 LD3 containers. Two prototypes were built and the variant made its maiden flight on 28 June 1973, became certified on 15 March 1974 and entered service with Air France on 23 May 1974. A300B2-200 For the A300B2-200, originally designated as the A300B2K, Krueger flaps were introduced at the leading-edge root, the slat angles were reduced from 20 degrees to 16 degrees, and other lift related changes were made in order to introduce a high-lift system. This was done to improve performance when operating at high-altitude airports, where the air is less dense and lift generation is reduced. The variant had an increased MTOW of and was powered by CF6-50C engines, was certified on 23 June 1976, and entered service with South African Airways in November 1976. CF6-50C1 and CF6-50C2 models were also later fitted depending on customer requirements, these became certified on 22 February 1978 and 21 February 1980 respectively. A300B2-320 The A300B2-320 introduced the Pratt & Whitney JT9D powerplant and was powered by JT9D-59A engines. It retained the MTOW of the B2-200, was certified on 4 January 1980, and entered service with Scandinavian Airlines on 18 February 1980, with only four being produced. A300B4 A300B4-100 The initial A300B4 variant, later named the A300B4-100, included a centre fuel tank for an increased fuel capacity of , and had an increased MTOW of . It also featured Krueger flaps and had a similar high-lift system to what was later fitted to the A300B2-200. The variant made its maiden flight on 26 December 1974, was certified on 26 March 1975, and entered service with Germanair in May 1975. A300B4-200 The A300B4-200 had an increased MTOW of and featured an additional optional fuel tank in the rear cargo hold, which would reduce the cargo capacity by two LD3 containers. The variant was certified on 26 April 1979. A300-600 The A300-600, officially designated as the A300B4-600, was slightly longer than the A300B2 and A300B4 variants and had an increased interior space from using a similar rear fuselage to the Airbus A310, this allowed it to have two additional rows of seats. It was initially powered by Pratt & Whitney JT9D-7R4H1 engines, but was later fitted with General Electric CF6-80C2 engines, with Pratt & Whitney PW4156 or PW4158 engines being introduced in 1986. Other changes include an improved wing featuring a recambered trailing edge, the incorporation of simpler single-slotted Fowler flaps, the deletion of slat fences, and the removal of the outboard ailerons after they were deemed unnecessary on the A310. The variant made its first flight on 8 July 1983, was certified on 9 March 1984, and entered service in June 1984 with Saudi Arabian Airlines. A total of 313 A300-600s (all versions) have been sold. The A300-600 uses the A310 cockpits, featuring digital technology and electronic displays, eliminating the need for a flight engineer. The FAA issues a single type rating which allows operation of both the A310 and A300-600. A300-600: (Official designation: A300B4-600) The baseline model of the −600 series. A300-620C: (Official designation: A300C4-620) A convertible-freighter version. Four delivered between 1984 and 1985. A300-600F: (Official designation: A300F4-600) The freighter version of the baseline −600. A300-600R: (Official designation: A300B4-600R) The increased-range −600, achieved by an additional trim fuel tank in the tail. First delivery in 1988 to American Airlines; all A300s built since 1989 (freighters included) are −600Rs. Japan Air System (later merged into Japan Airlines) took delivery of the last new-built passenger A300, an A300-622R, in November 2002. A300-600RC: (Official designation: A300C4-600R) The convertible-freighter version of the −600R. Two were delivered in 1999. A300-600RF: (Official designation: A300F4-600R) The freighter version of the −600R. All A300s delivered between November 2002 and 12 July 2007 (last ever A300 delivery) were A300-600RFs. A300B10 (A310) Airbus had demand for an aircraft smaller than the A300. On 7 July 1978, the A310 (initially the A300B10) was launched with orders from Swissair and Lufthansa. On 3 April 1982, the first prototype conducted its maiden flight and it received its type certification on 11 March 1983. Keeping the same eight-abreast cross-section, the A310 is shorter than the initial A300 variants, and has a smaller wing, down from . The A310 introduced a two-crew glass cockpit, later adopted for the A300-600 with a common type rating. It was powered by the same GE CF6-80 or Pratt & Whitney JT9D then PW4000 turbofans. It can seat 220 passengers in two classes, or 240 in all-economy, and can fly up to . It has overwing exits between the two main front and rear door pairs. In April 1983, the aircraft entered revenue service with Swissair and competed with the Boeing 767–200, introduced six months before. Its longer range and ETOPS regulations allowed it to be operated on transatlantic flights. Until the last delivery in June 1998, 255 aircraft were produced, as it was succeeded by the larger Airbus A330-200. It has cargo aircraft versions, and was derived into the Airbus A310 MRTT military tanker/transport. A300-600ST Commonly referred to as the Airbus Beluga or "Airbus Super Transporter," these five airframes are used by Airbus to ferry parts between the company's disparate manufacturing facilities, thus enabling workshare distribution. They replaced the four Aero Spacelines Super Guppys previously used by Airbus. ICAO code: A3ST Operators , there are 197 A300 family aircraft in commercial service. The five largest operators were FedEx Express (70), UPS Airlines (52), European Air Transport Leipzig (23), Iran Air (11), and Mahan Air (11). Deliveries Data through end of December 2007. Accidents and incidents As of June 2021, the A300 has been involved in 77 occurrences including 24 hull-loss accidents causing 1133 fatalities, and criminal occurrences and hijackings causing fatalities. Accidents with fatalities 21 September 1987: EgyptAir Airbus A300B4-203 touched down past the runway threshold. The right main gear hit runway lights and the aircraft collided with an antenna and fences. No passengers were on board the plane, but 5 crew members were killed. 28 September 1992: PIA Flight 268, an A300B4 crashed on approach near Kathmandu, Nepal. All 12 crew and 155 passengers perished. 26 April 1994: China Airlines Flight 140 (Taiwan) crashed at the end of runway at Nagoya, Japan, killing all 15 crew and 249 of 256 passengers on board. 26 September 1997: Garuda Indonesia Flight 152 was on approach to Polonia International Airport in Medan. The plane later crashed into a ravine in Buah Nabar due to ATC error and apparent haze that covers the country which limits the visibility. All 234 passengers and crew aboard perished in Indonesia's deadliest crash. 16 February 1998: China Airlines Flight 676 (Taiwan) crashed into a residential area close to CKS International Airport near Taipei, Taiwan. All 196 people on board were killed, including Taiwan's central bank president. Seven people on the ground were also killed. 12 November 2001: American Airlines Flight 587 crashed into Belle Harbor—a neighbourhood in Queens, New York, United States—shortly after takeoff from John F. Kennedy International Airport. The vertical stabiliser ripped off the aircraft after the rudder was mishandled during wake turbulence. All 260 people on board were killed, along with 5 people on the ground. It is the second-deadliest incident involving an A300 to date and the second-deadliest aircraft incident on United States soil. 14 April 2010: AeroUnion Flight 302, an A300B4-203F, crashed on a road short of the runway while attempting to land at Monterrey Airport in Mexico. Seven people (five crew members and two on the ground) were killed. 14 August 2013: UPS Flight 1354, an Airbus A300F4-622R, crashed outside the perimeter fence on approach to Birmingham–Shuttlesworth International Airport in Birmingham, Alabama, United States. Both crew members died. Hull losses 18 December 1983: Malaysian Airline System Flight 684, an Airbus A300B4 leased from Scandinavian Airlines System (SAS), registration OY-KAA, crashed short of the runway at Kuala Lumpur in bad weather while attempting to land on a flight from Singapore. All 247 persons aboard escaped unharmed but the aircraft was destroyed in the resulting fire. 24 April 1993: an Air Inter Airbus A300B2-1C was written off after colliding with a light pole while being pushed back at Montpellier. In November 1993, an Indian Airlines A300 plane crash landed near Hyderabad airport. There were no deaths but the aircraft was written off. 10 August 1994 – Korean Air Flight 2033 (Airbus A300) from Seoul to Jeju, the flight approached faster than usual to avoid potential windshear. Fifty feet above the runway the co-pilot, who was not flying the aircraft, decided that there was insufficient runway left to land and tried to perform a go-around against the captain's wishes.[18] The aircraft touched down 1,773 meters beyond the runway threshold. The aircraft could not be stopped on the remaining 1,227 meters of runway and overran at a speed of 104 knots. After striking the airport wall and a guard post at 30 knots, the aircraft burst into flames and was incinerated. The cabin crew was credited with safely evacuating all passengers although only half of the aircraft's emergency exits were usable} 1 March 2004, Pakistan International Airlines Flight 2002 burst 2 tyres whilst taking off from King Abdulaziz International Airport. Fragments of the tyre were ingested by the engines, this caused the engines to catch fire and an aborted takeoff was performed. Due to the fire substantial damage to the engine and the left wing caused the aircraft to be written off. All 261 passengers and 12 crew survived. 16 November 2012: an Air Contractors Airbus A300B4-203(F) EI-EAC, operating flight QY6321 on behalf of EAT Leipzig from Leipzig (Germany) to Bratislava (Slovakia), suffered a nose wheel collapse during roll out after landing at Bratislava's M. R. Štefánik Airport. All three crew members survived unharmed, the aircraft was written off. As of December 2017, the aircraft still was parked at a remote area of the airport between runways 13 and 22. 12 October 2015: An Airbus A300B4-200F Freighter operated by Egyptian Tristar cargo carrier crashed in Mogadishu, Somalia. All the passengers and crew members survived the crash. 1 October 2016: An Airbus A300-B4 registration PR-STN on a cargo flight between São Paulo-Guarulhos and Recife suffered a runway excursion after landing and the aft gear collapsed upon touchdown. Violent incidents 27 June 1976: Air France Flight 139, originating in Tel Aviv, Israel and carrying 248 passengers and a crew of 12 took off from Athens, Greece, headed for Paris, France. The flight was hijacked by terrorists, and was eventually flown to Entebbe Airport in Uganda. At the airport, Israeli commandos rescued 102 of the 106 hostages. 26 October 1986: Thai Airways Flight 620, an Airbus A300B4-601, originating in Bangkok suffered an explosion mid-flight. The aircraft descended rapidly and was able to land safely at Osaka. The aircraft was later repaired and there were no fatalities. The cause was a hand grenade brought onto the plane by a Japanese gangster of the Yamaguchi-gumi. 62 of the 247 people on board were injured. 3 July 1988: Iran Air Flight 655 was shot down by USS Vincennes in the Persian Gulf after being mistaken for an attacking Iranian F-14 Tomcat, killing all 290 passengers and crew. 15 February 1991: two Kuwait Airways A300C4-620s and two Boeing 767s that had been seized during Iraq's occupation of Kuwait were destroyed in coalition bombing of Mosul Airport. 24 December 1994: Air France Flight 8969 was hijacked at Houari Boumedienne Airport in Algiers, by four terrorists who belonged to the Armed Islamic Group. The terrorists apparently intended to crash the plane over the Eiffel Tower on Boxing Day. After a failed attempt to leave Marseille following a confrontational firefight between the terrorists and the GIGN French Special Forces, the result was the death of all four terrorists. (Snipers on the terminal front's roof shot dead two of the terrorists. The other two terrorists died as a result of gunshots in the cabin after approximately 20 minutes.) Three hostages including a Vietnamese diplomat were executed in Algiers, 229 hostages survived, many of them wounded by shrapnel. The almost 15-year-old aircraft was written off. 24 December 1999: Indian Airlines Flight IC 814 from Kathmandu, Nepal, to New Delhi was hijacked. After refuelling and offloading a few passengers, the flight was diverted to Kandahar, Afghanistan. A Nepalese man was murdered while the plane was in flight. 22 November 2003: European Air Transport OO-DLL, operating on behalf of DHL Aviation, was hit by an SA-14 'Gremlin' missile after takeoff from Baghdad International Airport. The aeroplane lost hydraulic pressure and thus the controls. After extending the landing gear to create more drag, the crew piloted the plane using differences in engine thrust and landed the plane with minimal further damage. The plane was repaired and offered for sale, but in April 2011 it still remained parked at Baghdad Intl. 25 August 2011: an A300B4-620 5A-IAY of Afriqiyah Airways and A300B4-622 5A-DLZ of Libyan Arab Airlines were both destroyed in fighting between pro- and anti-Gadaffi forces at Tripoli International Airport. Aircraft on display Fourteen A300s are currently preserved: F-BUAD Airbus A300 ZERO-G, since August 2015 preserved at Cologne Bonn Airport, Germany. ex-HL7219 Korean Air Airbus A300B4 preserved at Korean Air Jeongseok Airfield. ex-N11984 Continental Airlines Airbus A300B4 preserved in South Korea as a Night Flight Restaurant. ex TC-ACD and TC-ACE Air ACT, preserved as coffee house at Uçak Cafe in Burhaniye, Turkey. ex TC-MNJ MNG Airlines, preserved as Köfte Airlines restaurant at Tekirdağ, Turkey. ex TC-FLA Fly Air, preserved as the Airbus Cafe & Restaurant at Kayseri, Turkey. ex TC-ACC Air ACT, preserved as the Uçak Kütüphane library and education centre at Çankırı, Turkey. ex EP-MHA Mahan Air, preserved as instructional airframe at the Botia Mahan Aviation College at Kerman, Iran. ex TC-FLM Fly Air, preserved as a restaurant at Istanbul, Turkey. ex B-18585 China Airlines, preserved as the Flight of Happiness restaurant at Taoyuan, Taiwan. ex-PK-JID Sempati Air Airbus A300B4 repainted in first A300B1 prototype colours, including original F-WUAB registration, became an exhibit in 2014 at the Aeroscopia museum in Blagnac, near Toulouse, France. ex TC-MCE MNG Airlines, preserved as a restaurant at the Danialand theme park at Agadir, Morocco. ex HL7240 Korean Air, preserved as instructional airframe (gate guard) at the Korea Aerospace University at Goyang, South Korea. Specifications See also References Further reading External links A300 1970s international airliners Twinjets Articles containing video clips Low-wing aircraft Aircraft first flown in 1972 Wide-body aircraft
2546
https://en.wikipedia.org/wiki/Automated%20theorem%20proving
Automated theorem proving
Automated theorem proving (also known as ATP or automated deduction) is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. Automated reasoning over mathematical proof was a major impetus for the development of computer science. Logical foundations While the roots of formalised logic go back to Aristotle, the end of the 19th and early 20th centuries saw the development of modern logic and formalised mathematics. Frege's Begriffsschrift (1879) introduced both a complete propositional calculus and what is essentially modern predicate logic. His Foundations of Arithmetic, published in 1884, expressed (parts of) mathematics in formal logic. This approach was continued by Russell and Whitehead in their influential Principia Mathematica, first published 1910–1913, and with a revised second edition in 1927. Russell and Whitehead thought they could derive all mathematical truth using axioms and inference rules of formal logic, in principle opening up the process to automatisation. In 1920, Thoralf Skolem simplified a previous result by Leopold Löwenheim, leading to the Löwenheim–Skolem theorem and, in 1930, to the notion of a Herbrand universe and a Herbrand interpretation that allowed (un)satisfiability of first-order formulas (and hence the validity of a theorem) to be reduced to (potentially infinitely many) propositional satisfiability problems. In 1929, Mojżesz Presburger showed that the first-order theory of the natural numbers with addition and equality (now called Presburger arithmetic in his honor) is decidable and gave an algorithm that could determine if a given sentence in the language was true or false. However, shortly after this positive result, Kurt Gödel published On Formally Undecidable Propositions of Principia Mathematica and Related Systems (1931), showing that in any sufficiently strong axiomatic system there are true statements that cannot be proved in the system. This topic was further developed in the 1930s by Alonzo Church and Alan Turing, who on the one hand gave two independent but equivalent definitions of computability, and on the other gave concrete examples of undecidable questions. First implementations Shortly after World War II, the first general-purpose computers became available. In 1954, Martin Davis programmed Presburger's algorithm for a JOHNNIAC vacuum-tube computer at the Institute for Advanced Study in Princeton, New Jersey. According to Davis, "Its great triumph was to prove that the sum of two even numbers is even". More ambitious was the Logic Theorist in 1956, a deduction system for the propositional logic of the Principia Mathematica, developed by Allen Newell, Herbert A. Simon and J. C. Shaw. Also running on a JOHNNIAC, the Logic Theorist constructed proofs from a small set of propositional axioms and three deduction rules: modus ponens, (propositional) variable substitution, and the replacement of formulas by their definition. The system used heuristic guidance, and managed to prove 38 of the first 52 theorems of the Principia. The "heuristic" approach of the Logic Theorist tried to emulate human mathematicians, and could not guarantee that a proof could be found for every valid theorem even in principle. In contrast, other, more systematic algorithms achieved, at least theoretically, completeness for first-order logic. Initial approaches relied on the results of Herbrand and Skolem to convert a first-order formula into successively larger sets of propositional formulae by instantiating variables with terms from the Herbrand universe. The propositional formulas could then be checked for unsatisfiability using a number of methods. Gilmore's program used conversion to disjunctive normal form, a form in which the satisfiability of a formula is obvious. Decidability of the problem Depending on the underlying logic, the problem of deciding the validity of a formula varies from trivial to impossible. For the common case of propositional logic, the problem is decidable but co-NP-complete, and hence only exponential-time algorithms are believed to exist for general proof tasks. For a first-order predicate calculus, Gödel's completeness theorem states that the theorems (provable statements) are exactly the semantically valid well-formed formulas, so the valid formulas are computably enumerable: given unbounded resources, any valid formula can eventually be proven. However, invalid formulas (those that are not entailed by a given theory), cannot always be recognized. The above applies to first-order theories, such as Peano arithmetic. However, for a specific model that may be described by a first-order theory, some statements may be true but undecidable in the theory used to describe the model. For example, by Gödel's incompleteness theorem, we know that any consistent theory whose axioms are true for the natural numbers cannot prove all first-order statements true for the natural numbers, even if the list of axioms is allowed to be infinite enumerable. It follows that an automated theorem prover will fail to terminate while searching for a proof precisely when the statement being investigated is undecidable in the theory being used, even if it is true in the model of interest. Despite this theoretical limit, in practice, theorem provers can solve many hard problems, even in models that are not fully described by any first-order theory (such as the integers). Related problems A simpler, but related, problem is proof verification, where an existing proof for a theorem is certified valid. For this, it is generally required that each individual proof step can be verified by a primitive recursive function or program, and hence the problem is always decidable. Since the proofs generated by automated theorem provers are typically very large, the problem of proof compression is crucial, and various techniques aiming at making the prover's output smaller, and consequently more easily understandable and checkable, have been developed. Proof assistants require a human user to give hints to the system. Depending on the degree of automation, the prover can essentially be reduced to a proof checker, with the user providing the proof in a formal way, or significant proof tasks can be performed automatically. Interactive provers are used for a variety of tasks, but even fully automatic systems have proved a number of interesting and hard theorems, including at least one that has eluded human mathematicians for a long time, namely the Robbins conjecture. However, these successes are sporadic, and work on hard problems usually requires a proficient user. Another distinction is sometimes drawn between theorem proving and other techniques, where a process is considered to be theorem proving if it consists of a traditional proof, starting with axioms and producing new inference steps using rules of inference. Other techniques would include model checking, which, in the simplest case, involves brute-force enumeration of many possible states (although the actual implementation of model checkers requires much cleverness, and does not simply reduce to brute force). There are hybrid theorem proving systems that use model checking as an inference rule. There are also programs that were written to prove a particular theorem, with a (usually informal) proof that if the program finishes with a certain result, then the theorem is true. A good example of this was the machine-aided proof of the four color theorem, which was very controversial as the first claimed mathematical proof that was essentially impossible to verify by humans due to the enormous size of the program's calculation (such proofs are called non-surveyable proofs). Another example of a program-assisted proof is the one that shows that the game of Connect Four can always be won by the first player. Industrial uses Commercial use of automated theorem proving is mostly concentrated in integrated circuit design and verification. Since the Pentium FDIV bug, the complicated floating point units of modern microprocessors have been designed with extra scrutiny. AMD, Intel and others use automated theorem proving to verify that division and other operations are correctly implemented in their processors. First-order theorem proving In the late 1960s agencies funding research in automated deduction began to emphasize the need for practical applications. One of the first fruitful areas was that of program verification whereby first-order theorem provers were applied to the problem of verifying the correctness of computer programs in languages such as Pascal, Ada, etc. Notable among early program verification systems was the Stanford Pascal Verifier developed by David Luckham at Stanford University. This was based on the Stanford Resolution Prover also developed at Stanford using John Alan Robinson's resolution principle. This was the first automated deduction system to demonstrate an ability to solve mathematical problems that were announced in the Notices of the American Mathematical Society before solutions were formally published. First-order theorem proving is one of the most mature subfields of automated theorem proving. The logic is expressive enough to allow the specification of arbitrary problems, often in a reasonably natural and intuitive way. On the other hand, it is still semi-decidable, and a number of sound and complete calculi have been developed, enabling fully automated systems. More expressive logics, such as higher-order logics, allow the convenient expression of a wider range of problems than first-order logic, but theorem proving for these logics is less well developed. Benchmarks, competitions, and sources The quality of implemented systems has benefited from the existence of a large library of standard benchmark examples—the Thousands of Problems for Theorem Provers (TPTP) Problem Library—as well as from the CADE ATP System Competition (CASC), a yearly competition of first-order systems for many important classes of first-order problems. Some important systems (all have won at least one CASC competition division) are listed below. E is a high-performance prover for full first-order logic, but built on a purely equational calculus, originally developed in the automated reasoning group of Technical University of Munich under the direction of Wolfgang Bibel, and now at Baden-Württemberg Cooperative State University in Stuttgart. Otter, developed at the Argonne National Laboratory, is based on first-order resolution and paramodulation. Otter has since been replaced by Prover9, which is paired with Mace4. SETHEO is a high-performance system based on the goal-directed model elimination calculus, originally developed by a team under direction of Wolfgang Bibel. E and SETHEO have been combined (with other systems) in the composite theorem prover E-SETHEO. Vampire was originally developed and implemented at Manchester University by Andrei Voronkov and Kryštof Hoder. It is now developed by a growing international team. It has won the FOF division (among other divisions) at the CADE ATP System Competition regularly since 2001. Waldmeister is a specialized system for unit-equational first-order logic developed by Arnim Buch and Thomas Hillenbrand. It won the CASC UEQ division for fourteen consecutive years (1997–2010). SPASS is a first-order logic theorem prover with equality. This is developed by the research group Automation of Logic, Max Planck Institute for Computer Science. The Theorem Prover Museum is an initiative to conserve the sources of theorem prover systems for future analysis, since they are important cultural/scientific artefacts. It has the sources of many of the systems mentioned above. Popular techniques First-order resolution with unification Model elimination Method of analytic tableaux Superposition and term rewriting Model checking Mathematical induction Binary decision diagrams DPLL Higher-order unification Quantifier elimination Software systems Free software Alt-Ergo Automath CVC E IsaPlanner LCF Mizar NuPRL Paradox Prover9 PVS SPARK (programming language) Twelf Z3 Theorem Prover Proprietary software CARINE Wolfram Mathematica ResearchCyc See also Curry–Howard correspondence Symbolic computation Ramanujan machine Computer-aided proof Formal verification Logic programming Proof checking Model checking Proof complexity Computer algebra system Program analysis (computer science) General Problem Solver Metamath language for formalized mathematics Notes References II . External links A list of theorem proving tools Formal methods
2547
https://en.wikipedia.org/wiki/Agent%20Orange
Agent Orange
Agent Orange is a chemical herbicide and defoliant, one of the tactical use Rainbow Herbicides. It was used by the U.S. military as part of its herbicidal warfare program, Operation Ranch Hand, during the Vietnam War from 1961 to 1971. It is a mixture of equal parts of two herbicides, 2,4,5-T and 2,4-D. In addition to its damaging environmental effects, traces of dioxin (mainly TCDD, the most toxic of its type) found in the mixture have caused major health problems for many individuals who were exposed, and their offspring. Agent Orange was produced in the United States from the late 1940s and was used in industrial agriculture, and was also sprayed along railroads and power lines to control undergrowth in forests. During the Vietnam War, the U.S. military procured over , consisting of a fifty-fifty mixture of 2,4-D and dioxin-contaminated 2,4,5-T. Nine chemical companies produced it: Dow Chemical Company, Monsanto Company, Diamond Shamrock Corporation, Hercules Inc., Thompson Hayward Chemical Co., United States Rubber Company (Uniroyal), Thompson Chemical Co., Hoffman-Taff Chemicals, Inc., and Agriselect. The government of Vietnam says that up to four million people in Vietnam were exposed to the defoliant, and as many as three million people have suffered illness because of Agent Orange, while the Vietnamese Red Cross estimates that up to one million people were disabled or have health problems as a result of exposure to Agent Orange. The United States government has described these figures as unreliable, while documenting cases of leukemia, Hodgkin's lymphoma, and various kinds of cancer in exposed U.S. military veterans. An epidemiological study done by the Centers for Disease Control and Prevention showed that there was an increase in the rate of birth defects of the children of military personnel as a result of Agent Orange. Agent Orange has also caused enormous environmental damage in Vietnam. Over or of forest were defoliated. Defoliants eroded tree cover and seedling forest stock, making reforestation difficult in numerous areas. Animal species diversity is sharply reduced in contrast with unsprayed areas. The environmental destruction caused by this defoliation has been described by Swedish Prime Minister Olof Palme, lawyers, historians and other academics as an ecocide. The use of Agent Orange in Vietnam resulted in numerous legal actions. The United Nations ratified United Nations General Assembly Resolution 31/72 and the Environmental Modification Convention. Lawsuits filed on behalf of both U.S. and Vietnamese veterans sought compensation for damages. Agent Orange was first used by the British Armed Forces in Malaya during the Malayan Emergency. It was also used by the U.S. military in Laos and Cambodia during the Vietnam War because forests near the border with Vietnam were used by the Viet Cong. Chemical composition The active ingredient of Agent Orange was an equal mixture of two phenoxy herbicides – 2,4-dichlorophenoxyacetic acid (2,4-D) and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T) – in iso-octyl ester form, which contained traces of the dioxin 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). TCDD was a trace (typically 2-3 ppm, ranging from 50 ppb to 50 ppm) - but significant - contaminant of Agent Orange. Toxicology TCDD is the most toxic of the dioxins and is classified as a human carcinogen by the U.S. Environmental Protection Agency (EPA). The fat-soluble nature of TCDD causes it to enter the body readily through physical contact or ingestion. Dioxins accumulate easily in the food chain. Dioxin enters the body by attaching to a protein called the aryl hydrocarbon receptor (AhR), a transcription factor. When TCDD binds to AhR, the protein moves to the nucleus, where it influences gene expression. According to U.S. government reports, if not bound chemically to a biological surface such as soil, leaves or grass, Agent Orange dries quickly after spraying and breaks down within hours to days when exposed to sunlight and is no longer harmful. Development Several herbicides were developed as part of efforts by the United States and the United Kingdom to create herbicidal weapons for use during World War II. These included 2,4-D, 2,4,5-T, MCPA (2-methyl-4-chlorophenoxyacetic acid, 1414B and 1414A, recoded LN-8 and LN-32), and isopropyl phenylcarbamate (1313, recoded LN-33). In 1943, the United States Department of the Army contracted botanist (and later bioethicist) Arthur Galston, who discovered the defoliants later used in Agent Orange, and his employer University of Illinois Urbana-Champaign to study the effects of 2,4-D and 2,4,5-T on cereal grains (including rice) and broadleaf crops. While a graduate and post-graduate student at the University of Illinois, Galston's research and dissertation focused on finding a chemical means to make soybeans flower and fruit earlier. He discovered both that 2,3,5-triiodobenzoic acid (TIBA) would speed up the flowering of soybeans and that in higher concentrations it would defoliate the soybeans. From these studies arose the concept of using aerial applications of herbicides to destroy enemy crops to disrupt their food supply. In early 1945, the U.S. Army ran tests of various 2,4-D and 2,4,5-T mixtures at the Bushnell Army Airfield in Florida. As a result, the U.S. began a full-scale production of 2,4-D and 2,4,5-T and would have used it against Japan in 1946 during Operation Downfall if the war had continued. In the years after the war, the U.S. tested 1,100 compounds, and field trials of the more promising ones were done at British stations in India and Australia, in order to establish their effects in tropical conditions, as well as at the U.S. testing ground in Florida. Between 1950 and 1952, trials were conducted in Tanganyika, at Kikore and Stunyansa, to test arboricides and defoliants under tropical conditions. The chemicals involved were 2,4-D, 2,4,5-T, and endothall (3,6-endoxohexahydrophthalic acid). During 1952–53, the unit supervised the aerial spraying of 2,4,5-T in Kenya to assess the value of defoliants in the eradication of tsetse fly. Early use In Malaya, the local unit of Imperial Chemical Industries researched defoliants as weed killers for rubber plantations. Roadside ambushes by the Malayan National Liberation Army were a danger to the British military during the Malayan Emergency (1948–1960), several trials were made to defoliate vegetation that might hide ambush sites, but hand removal was found cheaper. A detailed account of how the British experimented with the spraying of herbicides was written by two scientists, E.K. Woodford of Agricultural Research Council's Unit of Experimental Agronomy and H.G.H. Kearns of the University of Bristol. After the Malayan Emergency ended in 1960, the U.S. considered the British precedent in deciding that the use of defoliants was a legal tactic of warfare. Secretary of State Dean Rusk advised President John F. Kennedy that the British had established a precedent for warfare with herbicides in Malaya. Use in the Vietnam War In mid-1961, President Ngo Dinh Diem of South Vietnam asked the United States to help defoliate the lush jungle that was providing cover to his Communist enemies. In August of that year, the Republic of Vietnam Air Force conducted herbicide operations with American help. Diem's request launched a policy debate in the White House and the State and Defense Departments. Many U.S. officials supported herbicide operations, pointing out that the British had already used herbicides and defoliants in Malaya during the 1950s. In November 1961, Kennedy authorized the start of Operation Ranch Hand, the codename for the United States Air Force's herbicide program in Vietnam. The herbicide operations were formally directed by the government of South Vietnam. During the Vietnam War, between 1962 and 1971, the United States military sprayed nearly of various chemicals – the "rainbow herbicides" and defoliants – in Vietnam, eastern Laos, and parts of Cambodia as part of Operation Ranch Hand, reaching its peak from 1967 to 1969. For comparison purposes, an olympic size pool holds approximately . As the British did in Malaya, the goal of the U.S. was to defoliate rural/forested land, depriving guerrillas of food and concealment and clearing sensitive areas such as around base perimeters and possible ambush sites along roads and canals. Samuel P. Huntington argued that the program was also a part of a policy of forced draft urbanization, which aimed to destroy the ability of peasants to support themselves in the countryside, forcing them to flee to the U.S.-dominated cities, depriving the guerrillas of their rural support base. Agent Orange was usually sprayed from helicopters or from low-flying C-123 Provider aircraft, fitted with sprayers and "MC-1 Hourglass" pump systems and chemical tanks. Spray runs were also conducted from trucks, boats, and backpack sprayers. Altogether, over of Agent Orange were applied. The first batch of herbicides was unloaded at Tan Son Nhut Air Base in South Vietnam, on January 9, 1962. U.S. Air Force records show at least 6,542 spraying missions took place over the course of Operation Ranch Hand. By 1971, 12 percent of the total area of South Vietnam had been sprayed with defoliating chemicals, at an average concentration of 13 times the recommended U.S. Department of Agriculture application rate for domestic use. In South Vietnam alone, an estimated of agricultural land was ultimately destroyed. In some areas, TCDD concentrations in soil and water were hundreds of times greater than the levels considered safe by the EPA. The campaign destroyed of upland and mangrove forests and thousands of square kilometres of crops. Overall, more than 20% of South Vietnam's forests were sprayed at least once over the nine-year period. 3.2% of South Vietnam's cultivated land was sprayed at least once between 1965 and 1971. 90% of herbicide use was directed at defoliation. The U.S. military began targeting food crops in October 1962, primarily using Agent Blue; the American public was not made aware of the crop destruction programs until 1965 (and it was then believed that crop spraying had begun that spring). In 1965, 42% of all herbicide spraying was dedicated to food crops. In 1965, members of the U.S. Congress were told, "crop destruction is understood to be the more important purpose ... but the emphasis is usually given to the jungle defoliation in public mention of the program." The first official acknowledgment of the programs came from the State Department in March 1966. When crops were destroyed, the Viet Cong would compensate for the loss of food by confiscating more food from local villages. Some military personnel reported being told they were destroying crops used to feed guerrillas, only to later discover, most of the destroyed food was actually produced to support the local civilian population. For example, according to Wil Verwey, 85% of the crop lands in Quang Ngai province were scheduled to be destroyed in 1970 alone. He estimated this would have caused famine and left hundreds of thousands of people without food or malnourished in the province. According to a report by the American Association for the Advancement of Science, the herbicide campaign had disrupted the food supply of more than 600,000 people by 1970. Many experts at the time, including Arthur Galston, opposed herbicidal warfare because of concerns about the side effects to humans and the environment by indiscriminately spraying the chemical over a wide area. As early as 1966, resolutions were introduced to the United Nations charging that the U.S. was violating the 1925 Geneva Protocol, which regulated the use of chemical and biological weapons in international conflicts. The U.S. defeated most of the resolutions, arguing that Agent Orange was not a chemical or a biological weapon as it was considered a herbicide and a defoliant and it was used in effort to destroy plant crops and to deprive the enemy of concealment and not meant to target human beings. The U.S. delegation argued that a weapon, by definition, is any device used to injure, defeat, or destroy living beings, structures, or systems, and Agent Orange did not qualify under that definition. It also argued that if the U.S. were to be charged for using Agent Orange, then the United Kingdom and its Commonwealth nations should be charged since they also used it widely during the Malayan Emergency in the 1950s. In 1969, the United Kingdom commented on the draft Resolution 2603 (XXIV): "The evidence seems to us to be notably inadequate for the assertion that the use in war of chemical substances specifically toxic to plants is prohibited by international law." The environmental destruction caused by this defoliation has been described by Swedish Prime Minister Olof Palme, lawyers, historians and other academics as an ecocide. A study carried out by the Bionetic Research Laboratories between 1965 and 1968 found malformations in test animals caused by 2,4,5-T, a component of Agent Orange. The study was later brought to the attention of the White House in October 1969. Other studies reported similar results and the Department of Defense began to reduce the herbicide operation. On April 15, 1970, it was announced that the use of Agent Orange was suspended. Two brigades of the Americal Division in the summer of 1970 continued to use Agent Orange for crop destruction in violation of the suspension. An investigation led to disciplinary action against the brigade and division commanders because they had falsified reports to hide its use. Defoliation and crop destruction were completely stopped by June 30, 1971. Health effects There are various types of cancer associated with Agent Orange, including chronic B-cell leukemia, Hodgkin's lymphoma, multiple myeloma, non-Hodgkin's lymphoma, prostate cancer, respiratory cancer, lung cancer, and soft tissue sarcomas. Vietnamese people The government of Vietnam states that 4 million of its citizens were exposed to Agent Orange, and as many as 3 million have suffered illnesses because of it; these figures include their children who were exposed. The Red Cross of Vietnam estimates that up to 1 million people are disabled or have health problems due to Agent Orange contamination. The United States government has challenged these figures as being unreliable. According to a study by Dr. Nguyen Viet Nhan, children in the areas where Agent Orange was used have been affected and have multiple health problems, including cleft palate, mental disabilities, hernias, and extra fingers and toes. In the 1970s, high levels of dioxin were found in the breast milk of South Vietnamese women, and in the blood of U.S. military personnel who had served in Vietnam. The most affected zones are the mountainous area along Truong Son (Long Mountains) and the border between Vietnam and Cambodia. The affected residents are living in substandard conditions with many genetic diseases. In 2006, Anh Duc Ngo and colleagues of the University of Texas Health Science Center published a meta-analysis that exposed a large amount of heterogeneity (different findings) between studies, a finding consistent with a lack of consensus on the issue. Despite this, statistical analysis of the studies they examined resulted in data that the increase in birth defects/relative risk (RR) from exposure to agent orange/dioxin "appears" to be on the order of 3 in Vietnamese-funded studies, but 1.29 in the rest of the world. There is data near the threshold of statistical significance suggesting Agent Orange contributes to still-births, cleft palate, and neural tube defects, with spina bifida being the most statistically significant defect. The large discrepancy in RR between Vietnamese studies and those in the rest of the world has been ascribed to bias in the Vietnamese studies. Twenty-eight of the former U.S. military bases in Vietnam where the herbicides were stored and loaded onto airplanes may still have high levels of dioxins in the soil, posing a health threat to the surrounding communities. Extensive testing for dioxin contamination has been conducted at the former U.S. airbases in Da Nang, Phù Cát District and Biên Hòa. Some of the soil and sediment on the bases have extremely high levels of dioxin requiring remediation. The Da Nang Air Base has dioxin contamination up to 350 times higher than international recommendations for action. The contaminated soil and sediment continue to affect the citizens of Vietnam, poisoning their food chain and causing illnesses, serious skin diseases and a variety of cancers in the lungs, larynx, and prostate. U.S. veterans While in Vietnam, US-allied soldiers were told not to worry about agent orange and were persuaded the chemical was harmless. After returning home, Vietnam veterans began to suspect their ill health or the instances of their wives having miscarriages or children born with birth defects might be related to Agent Orange and the other toxic herbicides to which they had been exposed in Vietnam. Veterans began to file claims in 1977 to the Department of Veterans Affairs for disability payments for health care for conditions they believed were associated with exposure to Agent Orange, or more specifically, dioxin, but their claims were denied unless they could prove the condition began when they were in the service or within one year of their discharge. In order to qualify for compensation, veterans must have served on or near the perimeters of military bases in Thailand during the Vietnam Era, where herbicides were tested and stored outside of Vietnam, veterans who were crew members on C-123 planes flown after the Vietnam War, or were associated with Department of Defense (DoD) projects to test, dispose of, or store herbicides in the U.S. By April 1993, the Department of Veterans Affairs had compensated only 486 victims, although it had received disability claims from 39,419 soldiers who had been exposed to Agent Orange while serving in Vietnam. In a November 2004 Zogby International poll of 987 people, 79% of respondents thought the U.S. chemical companies which produced Agent Orange defoliant should compensate U.S. soldiers who were affected by the toxic chemical used during the war in Vietnam and 51% said they supported compensation for Vietnamese Agent Orange victims. National Academy of Medicine Starting in the early 1990s, the federal government directed the Institute of Medicine (IOM), now known as the National Academy of Medicine, to issue reports every 2 years on the health effects of Agent Orange and similar herbicides. First published in 1994 and titled Veterans and Agent Orange, the IOM reports assess the risk of both cancer and non-cancer health effects. Each health effect is categorized by evidence of association based on available research data. The last update was published in 2016, entitled "Veterans and Agent Orange: Update 2014." The report shows sufficient evidence of an association with soft tissue sarcoma; non-Hodgkin lymphoma (NHL); Hodgkin disease; Chronic lymphocytic leukemia (CLL); including hairy cell leukemia and other chronic B-cell leukemias. Limited or suggested evidence of an association was linked with respiratory cancers (lung, bronchus, trachea, larynx); prostate cancer; multiple myeloma; and bladder cancer. Numerous other cancers were determined to have inadequate or insufficient evidence of links to Agent Orange. The National Academy of Medicine has repeatedly concluded that any evidence suggestive of an association between Agent Orange and prostate cancer is, "limited because chance, bias, and confounding could not be ruled out with confidence." At the request of the Veterans Administration, the Institute Of Medicine evaluated whether service in these C-123 aircraft could have plausibly exposed soldiers and been detrimental to their health. Their report "Post-Vietnam Dioxin Exposure in Agent Orange-Contaminated C-123 Aircraft" confirmed it. U.S. Public Health Service Publications by the United States Public Health Service have shown that Vietnam veterans, overall, have increased rates of cancer, and nerve, digestive, skin, and respiratory disorders. The Centers for Disease Control and Prevention notes that in particular, there are higher rates of acute/chronic leukemia, Hodgkin's lymphoma and non-Hodgkin's lymphoma, throat cancer, prostate cancer, lung cancer, colon cancer, Ischemic heart disease, soft tissue sarcoma, and liver cancer. With the exception of liver cancer, these are the same conditions the U.S. Veterans Administration has determined may be associated with exposure to Agent Orange/dioxin and are on the list of conditions eligible for compensation and treatment. Military personnel who were involved in storage, mixture and transportation (including aircraft mechanics), and actual use of the chemicals were probably among those who received the heaviest exposures. Military members who served on Okinawa also claim to have been exposed to the chemical, but there is no verifiable evidence to corroborate these claims. Some studies have suggested that veterans exposed to Agent Orange may be more at risk of developing prostate cancer and potentially more than twice as likely to develop higher-grade, more lethal prostate cancers. However, a critical analysis of these studies and 35 others consistently found that there was no significant increase in prostate cancer incidence or mortality in those exposed to Agent Orange or 2,3,7,8-tetracholorodibenzo-p-dioxin. U.S. Veterans of Laos and Cambodia During the Vietnam War, the United States fought the North Vietnamese, and their allies, in Laos and Cambodia, including heavy bombing campaigns. They also sprayed large quantities of Agent Orange in each of those countries. According to one estimate, the U.S. dropped in Laos and in Cambodia. Because Laos and Cambodia were both officially neutral during the Vietnam War, the U.S. attempted to keep secret its military operations in those countries, from the American population and has largely avoided compensating American veterans and CIA personnel stationed in Cambodia and Laos who suffered permanent injuries as a result of exposure to Agent Orange there. One noteworthy exception, according to the U.S. Department of Labor, is a claim filed with the CIA by an employee of "a self-insured contractor to the CIA that was no longer in business." The CIA advised the Department of Labor that it "had no objections" to paying the claim and Labor accepted the claim for payment: Ecological impact About 17.8% or of the total forested area of Vietnam was sprayed during the war, which disrupted the ecological equilibrium. The persistent nature of dioxins, erosion caused by loss of tree cover, and loss of seedling forest stock meant that reforestation was difficult (or impossible) in many areas. Many defoliated forest areas were quickly invaded by aggressive pioneer species (such as bamboo and cogon grass), making forest regeneration difficult and unlikely. Animal species diversity was also impacted; in one study a Harvard biologist found 24 species of birds and 5 species of mammals in a sprayed forest, while in two adjacent sections of unsprayed forest there were, respectively, 145 and 170 species of birds and 30 and 55 species of mammals. Dioxins from Agent Orange have persisted in the Vietnamese environment since the war, settling in the soil and sediment and entering the food chain through animals and fish which feed in the contaminated areas. The movement of dioxins through the food web has resulted in bioconcentration and biomagnification. The areas most heavily contaminated with dioxins are former U.S. air bases. Sociopolitical impact American policy during the Vietnam War was to destroy crops, accepting the sociopolitical impact that that would have. The RAND Corporation's Memorandum 5446-ISA/ARPA states: "the fact that the VC [the Vietcong] obtain most of their food from the neutral rural population dictates the destruction of civilian crops ... if they are to be hampered by the crop destruction program, it will be necessary to destroy large portions of the rural economy – probably 50% or more". Crops were deliberately sprayed with Agent Orange and areas were bulldozed clear of vegetation forcing many rural civilians to cities. Legal and diplomatic proceedings International The extensive environmental damage that resulted from usage of the herbicide prompted the United Nations to pass Resolution 31/72 and ratify the Environmental Modification Convention. Many states do not regard this as a complete ban on the use of herbicides and defoliants in warfare, but it does require case-by-case consideration. Article 2(4) of Protocol III of the Convention on Certain Conventional Weapons contains the "Jungle Exception", which prohibits states from attacking forests or jungles "except if such natural elements are used to cover, conceal or camouflage combatants or military objectives or are military objectives themselves". This exception voids any protection of any military and civilian personnel from a napalm attack or something like Agent Orange, and it has been argued that it was clearly designed to cover situations like U.S. tactics in Vietnam. Class action lawsuit Since at least 1978, several lawsuits have been filed against the companies which produced Agent Orange, among them Dow Chemical, Monsanto, and Diamond Shamrock. Attorney Hy Mayerson was an early pioneer in Agent Orange litigation, working with environmental attorney Victor Yannacone in 1980 on the first class-action suits against wartime manufacturers of Agent Orange. In meeting Dr. Ronald A. Codario, one of the first civilian doctors to see affected patients, Mayerson, so impressed by the fact a physician would show so much interest in a Vietnam veteran, forwarded more than a thousand pages of information on Agent Orange and the effects of dioxin on animals and humans to Codario's office the day after he was first contacted by the doctor. The corporate defendants sought to escape culpability by blaming everything on the U.S. government. In 1980, Mayerson, with Sgt. Charles E. Hartz as their principal client, filed the first U.S. Agent Orange class-action lawsuit in Pennsylvania, for the injuries military personnel in Vietnam suffered through exposure to toxic dioxins in the defoliant. Attorney Mayerson co-wrote the brief that certified the Agent Orange Product Liability action as a class action, the largest ever filed as of its filing. Hartz's deposition was one of the first ever taken in America, and the first for an Agent Orange trial, for the purpose of preserving testimony at trial, as it was understood that Hartz would not live to see the trial because of a brain tumor that began to develop while he was a member of Tiger Force, special forces, and LRRPs in Vietnam. The firm also located and supplied critical research to the veterans' lead expert, Dr. Codario, including about 100 articles from toxicology journals dating back more than a decade, as well as data about where herbicides had been sprayed, what the effects of dioxin had been on animals and humans, and every accident in factories where herbicides were produced or dioxin was a contaminant of some chemical reaction. The chemical companies involved denied that there was a link between Agent Orange and the veterans' medical problems. However, on May 7, 1984, seven chemical companies settled the class-action suit out of court just hours before jury selection was to begin. The companies agreed to pay $180 million as compensation if the veterans dropped all claims against them. Slightly over 45% of the sum was ordered to be paid by Monsanto alone. Many veterans who were victims of Agent Orange exposure were outraged the case had been settled instead of going to court and felt they had been betrayed by the lawyers. "Fairness Hearings" were held in five major American cities, where veterans and their families discussed their reactions to the settlement and condemned the actions of the lawyers and courts, demanding the case be heard before a jury of their peers. Federal Judge Jack B. Weinstein refused the appeals, claiming the settlement was "fair and just". By 1989, the veterans' fears were confirmed when it was decided how the money from the settlement would be paid out. A totally disabled Vietnam veteran would receive a maximum of $12,000 spread out over the course of 10 years. Furthermore, by accepting the settlement payments, disabled veterans would become ineligible for many state benefits that provided far more monetary support than the settlement, such as food stamps, public assistance, and government pensions. A widow of a Vietnam veteran who died of Agent Orange exposure would receive $3,700. In 2004, Monsanto spokesman Jill Montgomery said Monsanto should not be liable at all for injuries or deaths caused by Agent Orange, saying: "We are sympathetic with people who believe they have been injured and understand their concern to find the cause, but reliable scientific evidence indicates that Agent Orange is not the cause of serious long-term health effects." New Jersey Agent Orange Commission In 1980, New Jersey created the New Jersey Agent Orange Commission, the first state commission created to study its effects. The commission's research project in association with Rutgers University was called "The Pointman Project". It was disbanded by Governor Christine Todd Whitman in 1996. During the first phase of the project, commission researchers devised ways to determine trace dioxin levels in blood. Prior to this, such levels could only be found in the adipose (fat) tissue. The project studied dioxin (TCDD) levels in blood as well as in adipose tissue in a small group of Vietnam veterans who had been exposed to Agent Orange and compared them to those of a matched control group; the levels were found to be higher in the exposed group. The second phase of the project continued to examine and compare dioxin levels in various groups of Vietnam veterans, including Soldiers, Marines, and Brownwater Naval personnel. U.S. Congress In 1991, Congress enacted the Agent Orange Act, giving the Department of Veterans Affairs the authority to declare certain conditions "presumptive" to exposure to Agent Orange/dioxin, making these veterans who served in Vietnam eligible to receive treatment and compensation for these conditions. The same law required the National Academy of Sciences to periodically review the science on dioxin and herbicides used in Vietnam to inform the Secretary of Veterans Affairs about the strength of the scientific evidence showing association between exposure to Agent Orange/dioxin and certain conditions. The authority for the National Academy of Sciences reviews and addition of any new diseases to the presumptive list by the VA expired in 2015 under the sunset clause of the Agent Orange Act of 1991. Through this process, the list of 'presumptive' conditions has grown since 1991, and currently the U.S. Department of Veterans Affairs has listed prostate cancer, respiratory cancers, multiple myeloma, type II diabetes mellitus, Hodgkin's disease, non-Hodgkin's lymphoma, soft tissue sarcoma, chloracne, porphyria cutanea tarda, peripheral neuropathy, chronic lymphocytic leukemia, and spina bifida in children of veterans exposed to Agent Orange as conditions associated with exposure to the herbicide. This list now includes B cell leukemias, such as hairy cell leukemia, Parkinson's disease and ischemic heart disease, these last three having been added on August 31, 2010. Several highly placed individuals in government are voicing concerns about whether some of the diseases on the list should, in fact, actually have been included. In 2011, an appraisal of the 20-year long Air Force Health Study that began in 1982 indicates that the results of the AFHS as they pertain to Agent Orange, do not provide evidence of disease in the Operation Ranch Hand veterans caused by "their elevated levels of exposure to Agent Orange". The VA initially denied the applications of post-Vietnam C-123 aircrew veterans because as veterans without "boots on the ground" service in Vietnam, they were not covered under VA's interpretation of "exposed". In June 2015, the Secretary of Veterans Affairs issued an Interim final rule providing presumptive service connection for post-Vietnam C-123 aircrews, maintenance staff and aeromedical evacuation crews. The VA now provides medical care and disability compensation for the recognized list of Agent Orange illnesses. U.S.–Vietnamese government negotiations In 2002, Vietnam and the U.S. held a joint conference on Human Health and Environmental Impacts of Agent Orange. Following the conference, the U.S. National Institute of Environmental Health Sciences (NIEHS) began scientific exchanges between the U.S. and Vietnam, and began discussions for a joint research project on the human health impacts of Agent Orange. These negotiations broke down in 2005, when neither side could agree on the research protocol and the research project was canceled. More progress has been made on the environmental front. In 2005, the first U.S.-Vietnam workshop on remediation of dioxin was held. Starting in 2005, the EPA began to work with the Vietnamese government to measure the level of dioxin at the Da Nang Air Base. Also in 2005, the Joint Advisory Committee on Agent Orange, made up of representatives of Vietnamese and U.S. government agencies, was established. The committee has been meeting yearly to explore areas of scientific cooperation, technical assistance and environmental remediation of dioxin. A breakthrough in the diplomatic stalemate on this issue occurred as a result of United States President George W. Bush's state visit to Vietnam in November 2006. In the joint statement, President Bush and President Triet agreed "further joint efforts to address the environmental contamination near former dioxin storage sites would make a valuable contribution to the continued development of their bilateral relationship." On May 25, 2007, President Bush signed the U.S. Troop Readiness, Veterans' Care, Katrina Recovery, and Iraq Accountability Appropriations Act, 2007 into law for the wars in Iraq and Afghanistan that included an earmark of $3 million specifically for funding for programs for the remediation of dioxin 'hotspots' on former U.S. military bases, and for public health programs for the surrounding communities; some authors consider this to be completely inadequate, pointing out that the Da Nang Airbase alone will cost $14 million to clean up, and that three others are estimated to require $60 million for cleanup. The appropriation was renewed in the fiscal year 2009 and again in FY 2010. An additional $12 million was appropriated in the fiscal year 2010 in the Supplemental Appropriations Act and a total of $18.5 million appropriated for fiscal year 2011. Secretary of State Hillary Clinton stated during a visit to Hanoi in October 2010 that the U.S. government would begin work on the clean-up of dioxin contamination at the Da Nang Airbase. In June 2011, a ceremony was held at Da Nang airport to mark the start of U.S.-funded decontamination of dioxin hotspots in Vietnam. Thirty-two million dollars has so far been allocated by the U.S. Congress to fund the program. A $43 million project began in the summer of 2012, as Vietnam and the U.S. forge closer ties to boost trade and counter China's rising influence in the disputed South China Sea. Vietnamese victims class action lawsuit in U.S. courts On January 31, 2004, a victim's rights group, the Vietnam Association for Victims of Agent Orange/dioxin (VAVA), filed a lawsuit in the United States District Court for the Eastern District of New York in Brooklyn, against several U.S. companies for liability in causing personal injury, by developing, and producing the chemical, and claimed that the use of Agent Orange violated the 1907 Hague Convention on Land Warfare, 1925 Geneva Protocol, and the 1949 Geneva Conventions. Dow Chemical and Monsanto were the two largest producers of Agent Orange for the U.S. military and were named in the suit, along with the dozens of other companies (Diamond Shamrock, Uniroyal, Thompson Chemicals, Hercules, etc.). On March 10, 2005, Judge Jack B. Weinstein of the Eastern District – who had presided over the 1984 U.S. veterans class-action lawsuit – dismissed the lawsuit, ruling there was no legal basis for the plaintiffs' claims. He concluded Agent Orange was not considered a poison under international humanitarian law at the time of its use by the U.S.; the U.S. was not prohibited from using it as a herbicide; and the companies which produced the substance were not liable for the method of its use by the government. In the dismissal statement issued by Weinstein, he wrote "The prohibition extended only to gases deployed for their asphyxiating or toxic effects on man, not to herbicides designed to affect plants that may have unintended harmful side-effects on people." Author and activist George Jackson had written previously that "if the Americans were guilty of war crimes for using Agent Orange in Vietnam, then the British would be also guilty of war crimes as well since they were the first nation to deploy the use of herbicides and defoliants in warfare and used them on a large scale throughout the Malayan Emergency. Not only was there no outcry by other states in response to the United Kingdom's use, but the U.S. viewed it as establishing a precedent for the use of herbicides and defoliants in jungle warfare." The U.S. government was also not a party in the lawsuit because of sovereign immunity, and the court ruled the chemical companies, as contractors of the U.S. government, shared the same immunity. The case was appealed and heard by the Second Circuit Court of Appeals in Manhattan on June 18, 2007. Three judges on the court upheld Weinstein's ruling to dismiss the case. They ruled that, though the herbicides contained a dioxin (a known poison), they were not intended to be used as a poison on humans. Therefore, they were not considered a chemical weapon and thus not a violation of international law. A further review of the case by the entire panel of judges of the Court of Appeals also confirmed this decision. The lawyers for the Vietnamese filed a petition to the U.S. Supreme Court to hear the case. On March 2, 2009, the Supreme Court denied certiorari and declined to reconsider the ruling of the Court of Appeals. Help for those affected in Vietnam To assist those who have been affected by Agent Orange/dioxin, the Vietnamese have established "peace villages", which each host between 50 and 100 victims, giving them medical and psychological help. As of 2006, there were 11 such villages, thus granting some social protection to fewer than a thousand victims. U.S. veterans of the war in Vietnam and individuals who are aware and sympathetic to the impacts of Agent Orange have supported these programs in Vietnam. An international group of veterans from the U.S. and its allies during the Vietnam War working with their former enemy—veterans from the Vietnam Veterans Association—established the Vietnam Friendship Village outside of Hanoi. The center provides medical care, rehabilitation and vocational training for children and veterans from Vietnam who have been affected by Agent Orange. In 1998, The Vietnam Red Cross established the Vietnam Agent Orange Victims Fund to provide direct assistance to families throughout Vietnam that have been affected. In 2003, the Vietnam Association of Victims of Agent Orange (VAVA) was formed. In addition to filing the lawsuit against the chemical companies, VAVA provides medical care, rehabilitation services and financial assistance to those injured by Agent Orange. The Vietnamese government provides small monthly stipends to more than 200,000 Vietnamese believed affected by the herbicides; this totaled $40.8 million in 2008. The Vietnam Red Cross has raised more than $22 million to assist the ill or disabled, and several U.S. foundations, United Nations agencies, European governments and nongovernmental organizations have given a total of about $23 million for site cleanup, reforestation, health care and other services to those in need. Vuong Mo of the Vietnam News Agency described one of the centers: May is 13, but she knows nothing, is unable to talk fluently, nor walk with ease due to for her bandy legs. Her father is dead and she has four elder brothers, all mentally retarded ... The students are all disabled, retarded and of different ages. Teaching them is a hard job. They are of the 3rd grade but many of them find it hard to do the reading. Only a few of them can. Their pronunciation is distorted due to their twisted lips and their memory is quite short. They easily forget what they've learned ... In the Village, it is quite hard to tell the kids' exact ages. Some in their twenties have a physical statures as small as the 7- or 8-years-old. They find it difficult to feed themselves, much less have mental ability or physical capacity for work. No one can hold back the tears when seeing the heads turning round unconsciously, the bandy arms managing to push the spoon of food into the mouths with awful difficulty ... Yet they still keep smiling, singing in their great innocence, at the presence of some visitors, craving for something beautiful. On June 16, 2010, members of the U.S.-Vietnam Dialogue Group on Agent Orange/Dioxin unveiled a comprehensive 10-year Declaration and Plan of Action to address the toxic legacy of Agent Orange and other herbicides in Vietnam. The Plan of Action was released as an Aspen Institute publication and calls upon the U.S. and Vietnamese governments to join with other governments, foundations, businesses, and nonprofits in a partnership to clean up dioxin "hot spots" in Vietnam and to expand humanitarian services for people with disabilities there. On September 16, 2010, Senator Patrick Leahy acknowledged the work of the Dialogue Group by releasing a statement on the floor of the United States Senate. The statement urges the U.S. government to take the Plan of Action's recommendations into account in developing a multi-year plan of activities to address the Agent Orange/dioxin legacy. Use outside of Vietnam Australia In 2008, Australian researcher Jean Williams claimed that cancer rates in Innisfail, Queensland, were 10 times higher than the state average because of secret testing of Agent Orange by the Australian military scientists during the Vietnam War. Williams, who had won the Order of Australia medal for her research on the effects of chemicals on U.S. war veterans, based her allegations on Australian government reports found in the Australian War Memorial's archives. A former soldier, Ted Bosworth, backed up the claims, saying that he had been involved in the secret testing. Neither Williams nor Bosworth have produced verifiable evidence to support their claims. The Queensland health department determined that cancer rates in Innisfail were no higher than those in other parts of the state. Canada The U.S. military, with the permission of the Canadian government, tested herbicides, including Agent Orange, in the forests near Canadian Forces Base Gagetown in New Brunswick. In 2007, the government of Canada offered a one-time ex gratia payment of $20,000 as compensation for Agent Orange exposure at CFB Gagetown. On July 12, 2005, Merchant Law Group, on behalf of over 1,100 Canadian veterans and civilians who were living in and around CFB Gagetown, filed a lawsuit to pursue class action litigation concerning Agent Orange and Agent Purple with the Federal Court of Canada. On August 4, 2009, the case was rejected by the court, citing lack of evidence. In 2007, the Canadian government announced that a research and fact-finding program initiated in 2005 had found the base was safe. On February 17, 2011, the Toronto Star revealed that Agent Orange had been employed to clear extensive plots of Crown land in Northern Ontario. The Toronto Star reported that, "records from the 1950s, 1960s and 1970s show forestry workers, often students and junior rangers, spent weeks at a time as human markers holding red, helium-filled balloons on fishing lines while low-flying planes sprayed toxic herbicides including an infamous chemical mixture known as Agent Orange on the brush and the boys below." In response to the Toronto Star article, the Ontario provincial government launched a probe into the use of Agent Orange. Guam An analysis of chemicals present in the island's soil, together with resolutions passed by Guam's legislature, suggest that Agent Orange was among the herbicides routinely used on and around Andersen Air Force Base and Naval Air Station Agana. Despite the evidence, the Department of Defense continues to deny that Agent Orange was stored or used on Guam. Several Guam veterans have collected evidence to assist in their disability claims for direct exposure to dioxin containing herbicides such as 2,4,5-T which are similar to the illness associations and disability coverage that has become standard for those who were harmed by the same chemical contaminant of Agent Orange used in Vietnam. South Korea Agent Orange was used in South Korea in the late 1960s and in 1999, about 20,000 South Koreans filed two separated lawsuits against U.S. companies, seeking more than $5 billion in damages. After losing a decision in 2002, they filed an appeal. In January 2006, the South Korean Appeals Court ordered Dow Chemical and Monsanto to pay $62 million in compensation to about 6,800 people. The ruling acknowledged that "the defendants failed to ensure safety as the defoliants manufactured by the defendants had higher levels of dioxins than standard", and, quoting the U.S. National Academy of Science report, declared that there was a "causal relationship" between Agent Orange and a range of diseases, including several cancers. The judges failed to acknowledge "the relationship between the chemical and peripheral neuropathy, the disease most widespread among Agent Orange victims". In 2011, the United States local press KPHO-TV in Phoenix, Arizona, alleged that in 1978 that the United States Army had buried 250 55-gallon drums () of Agent Orange in Camp Carroll, the U.S. Army base located in Gyeongsangbuk-do, South Korea. Currently, veterans who provide evidence meeting VA requirements for service in Vietnam and who can medically establish that anytime after this 'presumptive exposure' they developed any medical problems on the list of presumptive diseases, may receive compensation from the VA. Certain veterans who served in South Korea and are able to prove they were assigned to certain specified around the Korean Demilitarized Zone, during a specific time frame are afforded similar presumption. New Zealand The use of Agent Orange has been controversial in New Zealand, because of the exposure of New Zealand troops in Vietnam and because of the production of herbicide used in Agent Orange which has been alleged at various times to have been exported for use in the Vietnam War and to other users by the Ivon Watkins-Dow chemical plant in Paritutu, New Plymouth. There have been continuing claims, as yet unproven, that the suburb of Paritutu has also been polluted. However, the agriscience company Corteva (which split from DowDupont in 2019) agreed to clean up the Paritutu site in September 2022. There are cases of New Zealand soldiers developing cancers such as bone cancer, but none has been scientifically connected to exposure to herbicides. Philippines Herbicide persistence studies of Agents Orange and White were conducted in the Philippines. Johnston Atoll The U.S. Air Force operation to remove Herbicide Orange from Vietnam in 1972 was named Operation Pacer IVY, while the operation to destroy the Agent Orange stored at Johnston Atoll in 1977 was named Operation Pacer HO. Operation Pacer IVY collected Agent Orange in South Vietnam and removed it in 1972 aboard the ship MV Transpacific for storage on Johnston Atoll. The EPA reports that of Herbicide Orange was stored at Johnston Island in the Pacific and at Gulfport, Mississippi. Research and studies were initiated to find a safe method to destroy the materials, and it was discovered they could be incinerated safely under special conditions of temperature and dwell time. However, these herbicides were expensive, and the Air Force wanted to resell its surplus instead of dumping it at sea. Among many methods tested, a possibility of salvaging the herbicides by reprocessing and filtering out the TCDD contaminant with carbonized (charcoaled) coconut fibers. This concept was then tested in 1976 and a pilot plant constructed at Gulfport. From July to September 1977 during Operation Pacer HO, the entire stock of Agent Orange from both Herbicide Orange storage sites at Gulfport and Johnston Atoll was subsequently incinerated in four separate burns in the vicinity of Johnston Island aboard the Dutch-owned waste incineration ship . As of 2004, some records of the storage and disposition of Agent Orange at Johnston Atoll have been associated with the historical records of Operation Red Hat. Okinawa, Japan There have been dozens of reports in the press about use and/or storage of military formulated herbicides on Okinawa that are based upon statements by former U.S. service members that had been stationed on the island, photographs, government records, and unearthed storage barrels. The U.S. Department of Defense has denied these allegations with statements by military officials and spokespersons, as well as a January 2013 report authored by Dr. Alvin Young that was released in April 2013. In particular, the 2013 report rebuts articles written by journalist Jon Mitchell as well as a statement from "An Ecological Assessment of Johnston Atoll" a 2003 publication produced by the United States Army Chemical Materials Agency that states, "in 1972, the U.S. Air Force also brought about 25,000 200L drums () of the chemical, Herbicide Orange (HO) to Johnston Island that originated from Vietnam and was stored on Okinawa." The 2013 report states: "The authors of the [2003] report were not DoD employees, nor were they likely familiar with the issues surrounding Herbicide Orange or its actual history of transport to the Island." and detailed the transport phases and routes of Agent Orange from Vietnam to Johnston Atoll, none of which included Okinawa. Further official confirmation of restricted (dioxin containing) herbicide storage on Okinawa appeared in a 1971 Fort Detrick report titled "Historical, Logistical, Political and Technical Aspects of the Herbicide/Defoliant Program", which mentions that the environmental statement should consider "Herbicide stockpiles elsewhere in PACOM (Pacific Command) U.S. Government restricted materials Thailand and Okinawa (Kadena AFB)." The 2013 DoD report says that the environmental statement urged by the 1971 report was published in 1974 as "The Department of Air Force Final Environmental Statement", and that the latter did not find Agent Orange was held in either Thailand or Okinawa. Thailand Agent Orange was tested by the United States in Thailand during the Vietnam War. In 1999, buried drums were uncovered and confirmed to be Agent Orange. Workers who uncovered the drums fell ill while upgrading the airport near Hua Hin District, 100 km south of Bangkok. Vietnam-era veterans whose service involved duty on or near the perimeters of military bases in Thailand anytime between February 28, 1961, and May 7, 1975, may have been exposed to herbicides and may qualify for VA benefits. A declassified Department of Defense report written in 1973, suggests that there was a significant use of herbicides on the fenced-in perimeters of military bases in Thailand to remove foliage that provided cover for enemy forces. In 2013, the VA determined that herbicides used on the Thailand base perimeters may have been tactical and procured from Vietnam, or a strong, commercial type resembling tactical herbicides. United States The University of Hawaii has acknowledged extensive testing of Agent Orange on behalf of the United States Department of Defense in Hawaii along with mixtures of Agent Orange on Hawaii Island in 1966 and on Kaua'i Island in 1967–1968; testing and storage in other U.S. locations has been documented by the United States Department of Veterans Affairs. In 1971, the C-123 aircraft used for spraying Agent Orange were returned to the United States and assigned various East Coast USAF Reserve squadrons, and then employed in traditional airlift missions between 1972 and 1982. In 1994, testing by the Air Force identified some former spray aircraft as "heavily contaminated" with dioxin residue. Inquiries by aircrew veterans in 2011 brought a decision by the U.S. Department of Veterans Affairs opining that not enough dioxin residue remained to injure these post-Vietnam War veterans. On 26 January 2012, the U.S. Center For Disease Control's Agency for Toxic Substances and Disease Registry challenged this with their finding that former spray aircraft were indeed contaminated and the aircrews exposed to harmful levels of dioxin. In response to veterans' concerns, the VA in February 2014 referred the C-123 issue to the Institute of Medicine for a special study, with results released on January 9, 2015. In 1978, the EPA suspended spraying of Agent Orange in national forests. Agent Orange was sprayed on thousands of acres of brush in the Tennessee Valley for 15 years before scientists discovered the herbicide was dangerous. Monroe County, Tennessee, is one of the locations known to have been sprayed according to the Tennessee Valley Authority. Forty-four remote acres were sprayed with Agent Orange along power lines throughout the National Forest. In 1983, New Jersey declared a Passaic River production site to be a state of emergency. The dioxin pollution in the Passaic River dates back to the Vietnam era, when Diamond Alkali manufactured it in a factory along the river. The tidal river carried dioxin upstream and down, contaminating a stretch of riverbed in one of New Jersey's most populous areas. A December 2006 Department of Defense report listed Agent Orange testing, storage, and disposal sites at 32 locations throughout the United States, Canada, Thailand, Puerto Rico, Korea, and in the Pacific Ocean. The Veteran Administration has also acknowledged that Agent Orange was used domestically by U.S. forces in test sites throughout the United States. Eglin Air Force Base in Florida was one of the primary testing sites throughout the 1960s. Cleanup programs In February 2012, Monsanto agreed to settle a case covering dioxin contamination around a plant in Nitro, West Virginia, that had manufactured Agent Orange. Monsanto agreed to pay up to $9 million for cleanup of affected homes, $84 million for medical monitoring of people affected, and the community's legal fees. On 9 August 2012, the United States and Vietnam began a cooperative cleaning up of the toxic chemical on part of Da Nang International Airport, marking the first time the U.S. government has been involved in cleaning up Agent Orange in Vietnam. Danang was the primary storage site of the chemical. Two other cleanup sites the United States and Vietnam are looking at is Biên Hòa, in the southern province of Đồng Nai is a hotspot for dioxin and so is Phù Cát airport in the central province of Bình Định, says U.S. Ambassador to Vietnam David Shear. According to the Vietnamese newspaper Nhân Dân, the U.S. government provided $41 million to the project. As of 2017, some of soil have been cleaned. The Seabee's Naval Construction Battalion Center at Gulfport, Mississippi was the largest storage site in the United States for agent orange. It was about in size and was still being cleaned up in 2013. In 2016, the EPA laid out its plan for cleaning up an stretch of the Passaic River in New Jersey, with an estimated cost of $1.4 billion. The contaminants reached to Newark Bay and other waterways, according to the EPA, which has designated the area a Superfund site. Since destruction of the dioxin requires high temperatures over , the destruction process is energy intensive. See also Environmental impact of war Orange Crush (song) Rainbow herbicides Scorched earth Teratology Vietnam Syndrome Notes References NTP (National Toxicology Program); "Toxicology and Carcinogenesis Studies of 2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) in Female Harlan Sprague-Dawley Rats (Gavage Studies)", CASRN 1746-01-6, April 2006. – both of Young's books were commissioned by the U.S. Department of Defense, Office of the Deputy Under Secretary of Defense (Installations and Environment) Further reading Books see pages 245–252. with a foreword by Howard Zinn. Government/NGO reports "Agent Orange in Vietnam: Recent Developments in Remediation: Testimony of Ms. Tran Thi Hoan", Subcommittee on Asia, the Pacific and the Global Environment, U.S. House of Representatives, Committee on Foreign Affairs. July 15, 2010 "Agent Orange in Vietnam: Recent Developments in Remediation: Testimony of Dr. Nguyen Thi Ngoc Phuong", Subcommittee on Asia, the Pacific and the Global Environment, U.S. House of Representatives, Committee on Foreign Affairs. July 15, 2010 Agent Orange Policy, American Public Health Association, 2007 "Assessment of the health risk of dioxins", World Health Organization/International Programme on Chemical Safety, 1998 Operation Ranch Hand: Herbicides In Southeast Asia History of Operation Ranch Hand, 1983 "Agent Orange Dioxin Contamination in the Environment and Food Chain at Key Hotspots in Viet Nam" Boivin, TG, et al., 2011 News Fawthrop, Tom; Agent of suffering, Guardian, February 10, 2008 Cox, Paul; "The Legacy of Agent Orange is a Continuing Focus of VVAW", The Veteran, Vietnam Veterans Against the War, Volume 38, No. 2, Fall 2008. Barlett, Donald P. and Steele, James B.; "Monsanto's Harvest of Fear", Vanity Fair May 2008 Quick, Ben "The Boneyard" Orion Magazine, March/April 2008 Cheng, Eva; "Vietnam's Agent Orange victims call for solidarity", Green Left Weekly, September 28, 2005 Children and the Vietnam War 30–40 years after the use of Agent Orange Tokar, Brian; "Monsanto: A Checkered History", Z Magazine, March 1999 Video Agent Orange: The Last Battle. Dir. Stephanie Jobe, Adam Scholl. DVD. 2005 HADES. Dir. Caroline Delerue, screenplay by Mauro Bellanova, 2011 Short film by James Nguyen. Vietnam: The Secret Agent. Dir. Jacki Ochs, 1984 Photojournalism CNN Al Jazeera America External links U.S. Environmental Protection Agency – Dioxin Web site Agent Orange Office of Public Health and Environmental Hazards, U.S. Department of Veteran Affairs Report from the National Birth Defects Registry - Birth Defects in Vietnam Veterans' Children "An Ecological Assessment of Johnston Atoll" Aftermath of the Vietnam War Articles containing video clips Auxinic herbicides Carcinogens Defoliants Dioxins Environmental controversies Environmental impact of war Imperial Chemical Industries Malayan Emergency Medical controversies Military equipment of the Vietnam War Monsanto Operation Ranch Hand Teratogens United States war crimes
2560
https://en.wikipedia.org/wiki/Administrative%20law
Administrative law
Administrative law is a division of law governing the activities of executive branch agencies of government. Administrative law includes executive branch rule making (executive branch rules are generally referred to as "regulations"), adjudication, and the enforcement of laws. Administrative law is considered a branch of public law. Administrative law deals with the decision-making of such administrative units of government that are part of the executive branch in such areas as international trade, manufacturing, the environment, taxation, broadcasting, immigration, and transport. Administrative law expanded greatly during the 20th century, as legislative bodies worldwide created more government agencies to regulate the social, economic and political spheres of human interaction. Civil law countries often have specialized administrative courts that review these decisions. In the last fifty years, administrative law, in many countries of the civil law tradition, has opened itself to the influence of rules posed by supranational legal orders, in which judicial principles have a strong importance: it has led, for one, to changes in some traditional concepts of the administrative law model, as has happened with the public procurements or with judicial control of administrative activity and, for another, has built a supranational or international public administration, as in the environmental sector or with reference to education, for which, within the United Nations' system, it has been possible to assist to a further increase of administrative structure devoted to coordinate the States' activity in that sector. In civil law countries Unlike most common law jurisdictions, most civil law jurisdictions have specialized courts or sections to deal with administrative cases that as a rule apply procedural rules that are specifically designed for such cases and distinct from those applied in private law proceedings, such as contract or tort claims. Brazil In Brazil, administrative cases are typically heard either by the Federal Courts (in matters concerning the Federal Union) or by the Public Treasury divisions of State Courts (in matters concerning the States). In 1998 a constitutional reform led by the government of President Fernando Henrique Cardoso introduced regulatory agencies as a part of the executive branch. Since 1988, Brazilian administrative law has been strongly influenced by the judicial interpretations of the constitutional principles of public administration (Art. 37 of Federal Constitution): legality, impersonality, publicity of administrative acts, morality and efficiency. Chile In Chile the President of the Republic exercises the administrative function, in collaboration with several ministries or other authorities with ministerial rank. Each ministry has one or more under-secretaries that act through public service to meet public needs. There is no single specialized court to deal with actions against the administrative entities, but there are several specialized courts and procedures of review. China Administrative law in the China was virtually non-existent before the economic reform era initiated by Deng Xiaoping. Since the 1980s China has constructed a new legal framework for administrative law, establishing control mechanisms for overseeing the bureaucracy, and disciplinary committees for the Chinese Communist Party. However, many have argued that the usefulness of these laws is vastly inadequate in terms of controlling government actions, largely because of institutional and systemic obstacles like a weak judiciary, poorly trained judges and lawyers, and corruption. In 1990, the Administrative Supervision Regulations (行政检查条例) and the Administrative Reconsideration Regulations (行政复议条例) were passed. The 1993 State Civil Servant Provisional Regulations (国家公务员暂行条例) changed the way government officials were selected and promoted, requiring that they pass exams and yearly appraisals, and introducing a rotation system. The three regulations have been amended and upgraded into laws. In 1994, the State Compensation Law (国家赔偿法) was passed, followed by the Administrative Penalties Law (行政处罚法) in 1996. Administrative Compulsory Law was enforced in 2012. Administrative Litigation Law was amended in 2014. The General Administrative Procedure Law is underway. France In France, there is a dual jurisdictional system with the judiciary branch responsible for civil law and criminal law, and the administrative branch having jurisdiction when a government institution is involved. Most claims against the national or local governments as well as claims against private bodies providing public services are handled by administrative courts, which use the Conseil d'État (Council of State) as a court of last resort for both ordinary and special courts. The main administrative courts are the tribunaux administratifs and appeal courts are the cours administratives d'appel. Special administrative courts include the National Court of Asylum Right as well as military, medical and judicial disciplinary bodies. The French body of administrative law is called "droit administratif". Over the course of their history, France's administrative courts have developed an extensive and coherent case law (jurisprudence constante) and legal doctrine ( and ), often before similar concepts were enshrined in constitutional and legal texts. These principes include: Right to fair trial (droit à la défense), including for internal disciplinary bodies Right to challenge any administrative decision before an administrative court (droit au recours) Equal treatment of public service users (égalité devant le service public) Equal access to government employment (égalité d'accès à la fonction publique) without regard for political opinions Freedom of association (liberté d'association) Right to entrepreneurship (Liberté du Commerce et de l'industrie, lit. freedom of commerce and industry) Right to legal certainty (Droit à la sécurité juridique) French administrative law, the basis of continental administrative law, has had a strong influence on administrative laws in several other countries such as Belgium, Greece, Turkey and Tunisia. Germany In Germany administrative law is called "Verwaltungsrecht", which generally governs the relationship between authorities and citizens. It establishes citizens' rights and obligations. It is part of the public law, which deals with the organization, the tasks and the acting of the public administration. It also contains rules, regulations, orders and decisions created by and related to administrative agencies, such as federal agencies, federal state authorities, urban administrations, but also admissions offices and fiscal authorities etc. Administrative law in Germany follows three basic principles. Principle of the legality of the authority, which means that there is no acting against the law and no acting without a law. Principle of legal security, which includes a principle of legal certainty and the principle of non-retroactivity. Principle of proportionality, which means that an act of an authority has to be suitable, necessary and appropriate. Administrative law in Germany can be divided into general administrative law and special administrative law. General administrative law The general administration law is basically ruled in the administrative procedures law (Verwaltungsverfahrensgesetz [VwVfG]). Other legal sources are the Rules of the Administrative Courts (Verwaltungsgerichtsordnung [VwGO]), the social security code (Sozialgesetzbuch [SGB]) and the general fiscal law (Abgabenordnung [AO]). Administrative procedures law The Verwaltungsverfahrensgesetz (VwVfG) was enacted in 1977, where it regulates the main administrative procedures of the federal government. Its purpose is to ensure that the public authority's laws remain in order. The VwVfG contains the regulations for mass processes and gives legal power against the authorities. The VwVfG applies for public administrative activities of federal agencies as well as federal state authorities—in where federal laws are made. Paragraph 35 of the VwVfG defines the administrative act as the most common form of action in which the public administration occurs against a citizen. It states that an administration act is characterized by the following features: It is an official act of an authority in the field of public law to resolve an individual case with effect to the outside. Paragraph 36–39, 58–59, and 80 show the organization and structure of the administrative act. Paragraphs 48 and 49 state the prerequisites for redemption of an unlawful administration act (§ 48) and withdrawal of a lawful administration act (§ 49 VwVfG). Other legal sources Administration procedural law (Verwaltungsgerichtsordnung [VwGO]), which was enacted in 1960, rules the court procedures at the administrative court. The VwGO is divided into five parts, which are the constitution of the courts, action, remedies and retrial, costs and enforcement15 and final clauses and temporary arrangements. In absence of a rule, the VwGO is supplemented by the code of civil procedure (Zivilprozessordnung [ZPO]) and the judicature act (Gerichtsverfassungsgesetz [GVG]). In addition to the regulation of the administrative procedure, the VwVfG also constitutes the legal protection in administrative law beyond the court procedure. § 68 VwVGO rules the preliminary proceeding, called "Vorverfahren" or "Widerspruchsverfahren", which is a stringent prerequisite for the administrative procedure, if an action for rescission or a writ of mandamus against an authority is aimed. The preliminary proceeding gives each citizen, feeling unlawfully mistreated by an authority, the possibility to object and to force a review of an administrative act without going to court. The prerequisites to open the public law remedy are listed in § 40 I VwGO. Therefore, it is necessary to have the existence of a conflict in public law without any constitutional aspects and no assignment to another jurisdiction. The social security code (Sozialgesetzbuch [SGB]) and the general fiscal law are less important for the administrative law. They supplement the VwVfG and the VwGO in the fields of taxation and social legislation, such as social welfare or financial support for students (BaFÖG) etc. Special administrative law The special administrative law consists of various laws. Each special sector has its own law. The most important ones are the Town and Country Planning Code (Baugesetzbuch [BauGB]) Federal Control of Pollution Act (Bundesimmissionsschutzgesetz [BImSchG]) Industrial Code (Gewerbeordnung [GewO]) Police Law (Polizei- und Ordnungsrecht) Statute Governing Restaurants (Gaststättenrecht [GastG]). In Germany, the highest administrative court for most matters is the federal administrative court . There are federal courts with special jurisdiction in the fields of social security law () and tax law (). Italy In Italy administrative law is known as , a branch of public law whose rules govern the organization of the public administration and the activities of the pursuit of the public interest of the public administration and the relationship between this and the citizens. Its genesis is related to the principle of division of powers of the State. The administrative power, originally called "executive", is to organize resources and people whose function is devolved to achieve the public interest objectives as defined by the law. Netherlands In the Netherlands administrative law provisions are usually contained in the various laws about public services and regulations. There is however also a single General Administrative Law Act ( or Awb), which is a rather good sample of procedural laws in Europe. It applies both to the making of administrative decisions and the judicial review of these decisions in courts. Another act about judicial procedures in general is the (General time provisions act), with general provisions about time schedules in procedures. On the basis of the Awb, citizens can oppose a decision () made by an administrative agency () within the administration and apply for judicial review in courts if unsuccessful. Before going to court, citizens must usually first object to the decision with the administrative body who made it. This is called . This procedure allows for the administrative body to correct possible mistakes themselves and is used to filter cases before going to court. Sometimes, instead of , a different system is used called (administrative appeal). The difference with is that is filed with a different administrative body, usually a higher ranking one, than the administrative body that made the primary decision. is available only if the law on which the primary decision is based specifically provides for it. An example involves objecting to a traffic ticket with the district attorney (), after which the decision can be appealed in court. Unlike France or Germany, there are no special administrative courts of first instance in the Netherlands, but regular courts have an administrative "chamber" which specializes in administrative appeals. The courts of appeal in administrative cases however are specialized depending on the case, but most administrative appeals end up in the judicial section of the Council of State (Raad van State). Sweden In Sweden, there is a system of administrative courts that considers only administrative law cases, and is completely separate from the system of general courts. This system has three tiers, with 12 county administrative courts () as the first tier, four administrative courts of appeal () as the second tier, and the Supreme Administrative Court of Sweden () as the third tier. Migration cases are handled in a two-tier system, effectively within the system general administrative courts. Three of the administrative courts serve as migration courts () with the Administrative Court of Appeal in Stockholm serving as the Migration Court of Appeal (). Taiwan (ROC) In Taiwan the recently enacted Constitutional Procedure Act (憲法訴訟法) in 2019 (former Constitutional Interpretation Procedure Act, 1993), the Justices of the Constitutional Court of Judicial Yuan of Taiwan is in charge of judicial interpretation. As of 2019, this council has made 757 interpretations. Turkey In Turkey, the lawsuits against the acts and actions of the national or local governments and public bodies are handled by administrative courts which are the main administrative courts. The decisions of the administrative courts are checked by the Regional Administrative Courts and Council of State. Council of State as a court of last resort is exactly similar to Conseil d'État in France. Ukraine Administrative law in Ukraine is a homogeneous legal substance isolated in a system of jurisprudence characterized as: (1) a branch of law; (2) a science; (3) a discipline. In common law countries Generally speaking, most countries that follow the principles of common law have developed procedures for judicial review that limit the reviewability of decisions made by administrative law bodies. Often these procedures are coupled with legislation or other common law doctrines that establish standards for proper rulemaking. Administrative law may also apply to review of decisions of so-called semi-public bodies, such as non-profit corporations, disciplinary boards, and other decision-making bodies that affect the legal rights of members of a particular group or entity. While administrative decision-making bodies are often controlled by larger governmental units, their decisions could be reviewed by a court of general jurisdiction under some principle of judicial review based upon due process (United States) or fundamental justice (Canada). Judicial review of administrative decisions is different from an administrative appeal. When sitting in review of a decision, the Court will only look at the method in which the decision was arrived at, whereas in an administrative appeal the correctness of the decision itself will be examined, usually by a higher body in the agency. This difference is vital in appreciating administrative law in common law countries. The scope of judicial review may be limited to certain questions of fairness, or whether the administrative action is ultra vires. In terms of ultra vires actions in the broad sense, a reviewing court may set aside an administrative decision if it is unreasonable (under Canadian law, following the rejection of the "Patently Unreasonable" standard by the Supreme Court in Dunsmuir v New Brunswick), Wednesbury unreasonable (under British law), or arbitrary and capricious (under U.S. Administrative Procedure Act and New York State law). Administrative law, as laid down by the Supreme Court of India, has also recognized two more grounds of judicial review which were recognized but not applied by English Courts, namely legitimate expectation and proportionality. The powers to review administrative decisions are usually established by statute, but were originally developed from the royal prerogative writs of English law, such as the writ of mandamus and the writ of certiorari. In certain common law jurisdictions, such as India or Pakistan, the power to pass such writs is a Constitutionally guaranteed power. This power is seen as fundamental to the power of judicial review and an aspect of the independent judiciary. Australia Canada Singapore United Kingdom United States In the United States, many government agencies are organized under the executive branch of government, although a few are part of the judicial or legislative branches. In the federal government, the executive branch, led by the president, controls the federal executive departments, which are led by secretaries who are members of the United States Cabinet. The many independent agencies of the United States government created by statutes enacted by Congress exist outside of the federal executive departments but are still part of the executive branch. Congress has also created some special judicial bodies known as Article I tribunals to handle some areas of administrative law. The actions of executive agencies and independent agencies are the main focus of American administrative law. In response to the rapid creation of new independent agencies in the early twentieth century (see discussion below), Congress enacted the Administrative Procedure Act (APA) in 1946. Many of the independent agencies operate as miniature versions of the tripartite federal government, with the authority to "legislate" (through rulemaking; see Federal Register and Code of Federal Regulations), "adjudicate" (through administrative hearings), and to "execute" administrative goals (through agency enforcement personnel). Because the United States Constitution sets no limits on this tripartite authority of administrative agencies, Congress enacted the APA to establish fair administrative law procedures to comply with the constitutional requirements of due process. Agency procedures are drawn from four sources of authority: the APA, organic statutes, agency rules, and informal agency practice. It is important to note, though, that agencies can only act within their congressionally delegated authority, and must comply with the requirements of the APA. At state level the first version of the Model State Administrative Procedure Act was promulgated and published in 1946 by the Uniform Law Commission (ULC), in which year the Federal Administrative Procedure Act was drafted. It is incorporated basic principles with only enough elaboration of detail to support essential features, therefore it is a "model", and not a "uniform", act. A model act is needed because state administrative law in the states is not uniform, and there are a variety of approaches used in the various states. Later it was modified in 1961 and 1981. The present version is the 2010 Model State Administrative Procedure Act (MSAPA) which maintains the continuity with earlier ones. The reason of the revision is that, in the past two decades state legislatures, dissatisfied with agency rule-making and adjudication, have enacted statutes that modify administrative adjudication and rule-making procedure. The American Bar Association's official journal concerning administrative law is the Administrative Law Review, a quarterly publication that is managed and edited by students at the Washington College of Law. Historical development Stephen Breyer, a U.S. Supreme Court Justice from 1994 to 2022, divides the history of administrative law in the United States into six discrete periods, in his book, Administrative Law & Regulatory Policy (3d Ed., 1992): English antecedents & the American experience to 1875 1875 – 1930: the rise of regulation & the traditional model of administrative law 1930 – 1945: the New Deal 1945 – 1965: the Administrative Procedure Act & the maturation of the traditional model of administrative law 1965 – 1985: critique and transformation of the administrative process 1985 – ?: retreat or consolidation Agriculture The agricultural sector is one of the most heavily regulated sectors in the U.S. economy, as it is regulated in various ways at the international, federal, state, and local levels. Consequently, administrative law is a significant component of the discipline of agricultural law. The United States Department of Agriculture and its myriad agencies such as the Agricultural Marketing Service are the primary sources of regulatory activity, although other administrative bodies such as the Environmental Protection Agency play a significant regulatory role as well. See also Constitutionalism Rule of law Rechtsstaat References Further reading .
2581
https://en.wikipedia.org/wiki/Apache%20HTTP%20Server
Apache HTTP Server
The Apache HTTP Server ( ) is a free and open-source cross-platform web server software, released under the terms of Apache License 2.0. It is developed and maintained by a community of developers under the auspices of the Apache Software Foundation. The vast majority of Apache HTTP Server instances run on a Linux distribution, but current versions also run on Microsoft Windows, OpenVMS, and a wide variety of Unix-like systems. Past versions also ran on NetWare, OS/2 and other operating systems, including ports to mainframes. Originally based on the NCSA HTTPd server, development of Apache began in early 1995 after work on the NCSA code stalled. Apache played a key role in the initial growth of the World Wide Web, quickly overtaking NCSA HTTPd as the dominant HTTP server. In 2009, it became the first web server software to serve more than 100 million websites. , Netcraft estimated that Apache served 23.04% of the million busiest websites, while Nginx served 22.01%; Cloudflare at 19.53% and Microsoft Internet Information Services at 5.78% rounded out the top four. For some of Netcraft's other stats, Nginx is ahead of Apache. According to W3Techs' review of all web sites, in June 2022 Apache was ranked second at 31.4% and Nginx first at 33.6%, with Cloudflare Server third at 21.6%. Name According to The Apache Software Foundation, its name was chosen "from respect for the various Native American nations collectively referred to as Apache, well-known for their superior skills in warfare strategy and their inexhaustible endurance". This was in a context in which it seemed that the open internet -- based on free exchange of open source code -- appeared to be soon subjected to a kind of conquer by proprietary software vendor Microsoft; Apache co-creator Brian Behlendorf -- originator of the name -- saw his effort somewhat parallel that of Geronimo, Chief of the last of the free Apache peoples. But it conceded that the name "also makes a cute pun on 'a patchy web server'—a server made from a series of patches". There are other sources for the "patchy" software pun theory, including the project's official documentation in 1995, which stated: "Apache is a cute name which stuck. It was based on some existing code and a series of software patches, a pun on 'A PAtCHy' server." But in an April 2000 interview, Behlendorf asserted that the origins of Apache were not a pun, stating: In January 2023, the US-based non-profit Natives in Tech accused the Apache Software Foundation of cultural appropriation and urged them to change the foundation's name, and consequently also the names of the software projects it hosts. When Apache is running under Unix, its process name is , which is short for "HTTP daemon". Feature overview Apache supports a variety of features, many implemented as compiled modules which extend the core functionality. These can range from authentication schemes to supporting server-side programming languages such as Perl, Python, Tcl and PHP. Popular authentication modules include mod_access, mod_auth, mod_digest, and mod_auth_digest, the successor to mod_digest. A sample of other features include Secure Sockets Layer and Transport Layer Security support (mod_ssl), a proxy module (mod_proxy), a URL rewriting module (mod_rewrite), custom log files (mod_log_config), and filtering support (mod_include and mod_ext_filter). Popular compression methods on Apache include the external extension module, mod_gzip, implemented to help with reduction of the size (weight) of web pages served over HTTP. ModSecurity is an open source intrusion detection and prevention engine for Web applications. Apache logs can be analyzed through a Web browser using free scripts, such as AWStats/W3Perl or Visitors. Virtual hosting allows one Apache installation to serve many different websites. For example, one computer with one Apache installation could simultaneously serve example.com, example.org, test47.test-server.example.edu, etc. Apache features configurable error messages, DBMS-based authentication databases, content negotiation and supports several graphical user interfaces (GUIs). It supports password authentication and digital certificate authentication. Because the source code is freely available, anyone can adapt the server for specific needs, and there is a large public library of Apache add-ons. A more detailed list of features is provided below: Loadable Dynamic Modules Multiple Request Processing modes (MPMs) including Event-based/Async, Threaded and Prefork. Highly scalable (easily handles more than 10,000 simultaneous connections) Handling of static files, index files, auto-indexing and content negotiation .htaccess per-directory configuration support Reverse proxy with caching Load balancing with in-band health checks Multiple load balancing mechanisms Fault tolerance and Failover with automatic recovery WebSocket, FastCGI, SCGI, AJP and uWSGI support with caching Dynamic configuration TLS/SSL with SNI and OCSP stapling support, via OpenSSL or wolfSSL. Name- and IP address-based virtual servers IPv6-compatible HTTP/2 support Fine-grained authentication and authorization access control gzip compression and decompression URL rewriting Headers and content rewriting Custom logging with rotation Concurrent connection limiting Request processing rate limiting Bandwidth throttling Server Side Includes IP address-based geolocation User and Session tracking WebDAV Embedded Perl, PHP and Lua scripting CGI support public_html per-user web-pages Generic expression parser Real-time status views FTP support (by a separate module) Performance Instead of implementing a single architecture, Apache provides a variety of MultiProcessing Modules (MPMs), which allow it to run in either a process-based mode, a hybrid (process and thread) mode, or an event-hybrid mode, in order to better match the demands of each particular infrastructure. Choice of MPM and configuration is therefore important. Where compromises in performance must be made, Apache is designed to reduce latency and increase throughput relative to simply handling more requests, thus ensuring consistent and reliable processing of requests within reasonable time-frames. For delivering static pages, Apache 2.2 series was considered significantly slower than nginx and varnish. To address this issue, the Apache developers created the Event MPM, which mixes the use of several processes and several threads per process in an asynchronous event-based loop. This architecture as implemented in the Apache 2.4 series performs at least as well as event-based web servers, according to Jim Jagielski and other independent sources. However, some independent but significantly outdated benchmarks show that it is still half as fast as nginx, e.g. Licensing The Apache HTTP Server codebase was relicensed to the Apache 2.0 License (from the previous 1.1 license) in January 2004, and Apache HTTP Server 1.3.31 and 2.0.49 were the first releases using the new license. The OpenBSD project did not like the change and continued the use of pre-2.0 Apache versions, effectively forking Apache 1.3.x for its purposes. They initially replaced it with Nginx, and soon after made their own replacement, OpenBSD Httpd, based on the Relayd project. Versions Version 1.1: The Apache License 1.1 was approved by the ASF in 2000: The primary change from the 1.0 license is in the 'advertising clause' (section 3 of the 1.0 license); derived products are no longer required to include attribution in their advertising materials, only in their documentation. Version 2.0: The ASF adopted the Apache License 2.0 in January 2004. The stated goals of the license included making the license easier for non-ASF projects to use, improving compatibility with GPL-based software, allowing the license to be included by reference instead of listed in every file, clarifying the license on contributions, and requiring a patent license on contributions that necessarily infringe a contributor's own patents. Development The Apache HTTP Server Project is a collaborative software development effort aimed at creating a robust, commercial-grade, feature-rich and freely available source code implementation of an HTTP (Web) server. The project is jointly managed by a group of volunteers located around the world, using the Internet and the Web to communicate, plan, and develop the server and its related documentation. This project is part of the Apache Software Foundation. In addition, hundreds of users have contributed ideas, code, and documentation to the project. Apache 2.4 dropped support for BeOS, TPF, A/UX, NeXT, and Tandem platforms. Security Apache, like other server software, can be hacked and exploited. The main Apache attack tool is Slowloris, which exploits a bug in Apache software. It creates many sockets and keeps each of them alive and busy by sending several bytes (known as "keep-alive headers") to let the server know that the computer is still connected and not experiencing network problems. The Apache developers have addressed Slowloris with several modules to limit the damage caused; the Apache modules mod_limitipconn, mod_qos, mod_evasive, mod security, mod_noloris, and mod_antiloris have all been suggested as means of reducing the likelihood of a successful Slowloris attack. Since Apache 2.2.15, Apache ships the module mod_reqtimeout as the official solution supported by the developers. See also .htaccess .htpasswd ApacheBench Comparison of web server software IBM HTTP Server LAMP (software bundle) XAMPP List of Apache modules List of free and open-source software packages POSSE project suEXEC Apache Tomcat - another web server developed by the Apache Software Foundation References External links 1995 software HTTP Server Cross-platform free software Free software programmed in C Free web server software Reverse proxy Software using the Apache license Unix network-related software Web server software for Linux Web server software
2593
https://en.wikipedia.org/wiki/Accounting
Accounting
Accounting, also known as accountancy, is the processing of information about economic entities, such as businesses and corporations. Accounting measures the results of an organization's economic activities and conveys this information to a variety of stakeholders, including investors, creditors, management, and regulators. Practitioners of accounting are known as accountants. The terms "accounting" and "financial reporting" are often used as synonyms. Accounting can be divided into several fields including financial accounting, management accounting, tax accounting and cost accounting. Financial accounting focuses on the reporting of an organization's financial information, including the preparation of financial statements, to the external users of the information, such as investors, regulators and suppliers. Management accounting focuses on the measurement, analysis and reporting of information for internal use by management. The recording of financial transactions, so that summaries of the financials may be presented in financial reports, is known as bookkeeping, of which double-entry bookkeeping is the most common system. Accounting information systems are designed to support accounting functions and related activities. Accounting has existed in various forms and levels of sophistication throughout human history. The double-entry accounting system in use today was developed in medieval Europe, particularly in Venice, and is usually attributed to the Italian mathematician and Franciscan friar Luca Pacioli. Today, accounting is facilitated by accounting organizations such as standard-setters, accounting firms and professional bodies. Financial statements are usually audited by accounting firms, and are prepared in accordance with generally accepted accounting principles (GAAP). GAAP is set by various standard-setting organizations such as the Financial Accounting Standards Board (FASB) in the United States and the Financial Reporting Council in the United Kingdom. As of 2012, "all major economies" have plans to converge towards or adopt the International Financial Reporting Standards (IFRS). History Accounting is thousands of years old and can be traced to ancient civilizations. One early development of accounting dates back to ancient Mesopotamia, and is closely related to developments in writing, counting and money; there is also evidence of early forms of bookkeeping in ancient Iran, and early auditing systems by the ancient Egyptians and Babylonians. By the time of Emperor Augustus, the Roman government had access to detailed financial information. Double-entry bookkeeping was pioneered in the Jewish community of the early-medieval Middle East and was further refined in medieval Europe. With the development of joint-stock companies, accounting split into financial accounting and management accounting. The first published work on a double-entry bookkeeping system was the Summa de arithmetica, published in Italy in 1494 by Luca Pacioli (the "Father of Accounting"). Accounting began to transition into an organized profession in the nineteenth century, with local professional bodies in England merging to form the Institute of Chartered Accountants in England and Wales in 1880. Etymology Both the words accounting and accountancy were in use in Great Britain by the mid-1800s, and are derived from the words accompting and accountantship used in the 18th century. In Middle English (used roughly between the 12th and the late 15th century) the verb "to account" had the form accounten, which was derived from the Old French word aconter, which is in turn related to the Vulgar Latin word computare, meaning "to reckon". The base of computare is putare, which "variously meant to prune, to purify, to correct an account, hence, to count or calculate, as well as to think". The word "accountant" is derived from the French word , which is also derived from the Italian and Latin word . The word was formerly written in English as "accomptant", but in process of time the word, which was always pronounced by dropping the "p", became gradually changed both in pronunciation and in orthography to its present form. Terminology Accounting has variously been defined as the keeping or preparation of the financial records of transactions of the firm, the analysis, verification and reporting of such records and "the principles and procedures of accounting"; it also refers to the job of being an accountant. Accountancy refers to the occupation or profession of an accountant, particularly in British English. Topics Accounting has several subfields or subject areas, including financial accounting, management accounting, auditing, taxation and accounting information systems. Financial accounting Financial accounting focuses on the reporting of an organization's financial information to external users of the information, such as investors, potential investors and creditors. It calculates and records business transactions and prepares financial statements for the external users in accordance with generally accepted accounting principles (GAAP). GAAP, in turn, arises from the wide agreement between accounting theory and practice, and change over time to meet the needs of decision-makers. Financial accounting produces past-oriented reports—for example financial statements are often published six to ten months after the end of the accounting period—on an annual or quarterly basis, generally about the organization as a whole. Management accounting Management accounting focuses on the measurement, analysis and reporting of information that can help managers in making decisions to fulfill the goals of an organization. In management accounting, internal measures and reports are based on cost-benefit analysis, and are not required to follow the generally accepted accounting principle (GAAP). In 2014 CIMA created the Global Management Accounting Principles (GMAPs). The result of research from across 20 countries in five continents, the principles aim to guide best practice in the discipline. Management accounting produces past-oriented reports with time spans that vary widely, but it also encompasses future-oriented reports such as budgets. Management accounting reports often include financial and non financial information, and may, for example, focus on specific products and departments. Auditing Auditing is the verification of assertions made by others regarding a payoff, and in the context of accounting it is the "unbiased examination and evaluation of the financial statements of an organization". Audit is a professional service that is systematic and conventional. An audit of financial statements aims to express or disclaim an independent opinion on the financial statements. The auditor expresses an independent opinion on the fairness with which the financial statements presents the financial position, results of operations, and cash flows of an entity, in accordance with the generally accepted accounting principles (GAAP) and "in all material respects". An auditor is also required to identify circumstances in which the generally accepted accounting principles (GAAP) have not been consistently observed. Information systems An accounting information system is a part of an organization's information system used for processing accounting data. Many corporations use artificial intelligence-based information systems. The banking and finance industry uses AI in fraud detection. The retail industry uses AI for customer services. AI is also used in the cybersecurity industry. It involves computer hardware and software systems using statistics and modeling. Many accounting practices have been simplified with the help of accounting computer-based software. An enterprise resource planning (ERP) system is commonly used for a large organisation and it provides a comprehensive, centralized, integrated source of information that companies can use to manage all major business processes, from purchasing to manufacturing to human resources. These systems can be cloud based and available on demand via application or browser, or available as software installed on specific computers or local servers, often referred to as on-premise. Tax accounting Tax accounting in the United States concentrates on the preparation, analysis and presentation of tax payments and tax returns. The U.S. tax system requires the use of specialised accounting principles for tax purposes which can differ from the generally accepted accounting principles (GAAP) for financial reporting. U.S. tax law covers four basic forms of business ownership: sole proprietorship, partnership, corporation, and limited liability company. Corporate and personal income are taxed at different rates, both varying according to income levels and including varying marginal rates (taxed on each additional dollar of income) and average rates (set as a percentage of overall income). Forensic accounting Forensic accounting is a specialty practice area of accounting that describes engagements that result from actual or anticipated disputes or litigation. "Forensic" means "suitable for use in a court of law", and it is to that standard and potential outcome that forensic accountants generally have to work. Political campaign accounting Political campaign accounting deals with the development and implementation of financial systems and the accounting of financial transactions in compliance with laws governing political campaign operations. This branch of accounting was first formally introduced in the March 1976 issue of The Journal of Accountancy. Organizations Professional bodies Professional accounting bodies include the American Institute of Certified Public Accountants (AICPA) and the other 179 members of the International Federation of Accountants (IFAC), including Institute of Chartered Accountants of Scotland (ICAS), Institute of Chartered Accountants of Pakistan (ICAP), CPA Australia, Institute of Chartered Accountants of India, Association of Chartered Certified Accountants (ACCA) and Institute of Chartered Accountants in England and Wales (ICAEW). Some countries have a single professional accounting body and, in some other countries, professional bodies for subfields of the accounting professions also exist, for example the Chartered Institute of Management Accountants (CIMA) in the UK and Institute of management accountants in the United States. Many of these professional bodies offer education and training including qualification and administration for various accounting designations, such as certified public accountant (AICPA) and chartered accountant. Firms Depending on its size, a company may be legally required to have their financial statements audited by a qualified auditor, and audits are usually carried out by accounting firms. Accounting firms grew in the United States and Europe in the late nineteenth and early twentieth century, and through several mergers there were large international accounting firms by the mid-twentieth century. Further large mergers in the late twentieth century led to the dominance of the auditing market by the "Big Five" accounting firms: Arthur Andersen, Deloitte, Ernst & Young, KPMG and PricewaterhouseCoopers. The demise of Arthur Andersen following the Enron scandal reduced the Big Five to the Big Four. Standard-setters Generally accepted accounting principles (GAAP) are accounting standards issued by national regulatory bodies. In addition, the International Accounting Standards Board (IASB) issues the International Financial Reporting Standards (IFRS) implemented by 147 countries. Standards for international audit and assurance, ethics, education, and public sector accounting are all set by independent standard settings boards supported by IFAC. The International Auditing and Assurance Standards Board sets international standards for auditing, assurance, and quality control; the International Ethics Standards Board for Accountants (IESBA) sets the internationally appropriate principles-based Code of Ethics for Professional Accountants; the International Accounting Education Standards Board (IAESB) sets professional accounting education standards; and International Public Sector Accounting Standards Board (IPSASB) sets accrual-based international public sector accounting standards. Organizations in individual countries may issue accounting standards unique to the countries. For example, in Australia, the Australian Accounting Standards Board manages the issuance of the accounting standards in line with IFRS. In the United States the Financial Accounting Standards Board (FASB) issues the Statements of Financial Accounting Standards, which form the basis of US GAAP, and in the United Kingdom the Financial Reporting Council (FRC) sets accounting standards. However, as of 2012 "all major economies" have plans to converge towards or adopt the IFRS. Education, training and qualifications Degrees At least a bachelor's degree in accounting or a related field is required for most accountant and auditor job positions, and some employers prefer applicants with a master's degree. A degree in accounting may also be required for, or may be used to fulfill the requirements for, membership to professional accounting bodies. For example, the education during an accounting degree can be used to fulfill the American Institute of CPA's (AICPA) 150 semester hour requirement, and associate membership with the Certified Public Accountants Association of the UK is available after gaining a degree in finance or accounting. A doctorate is required in order to pursue a career in accounting academia, for example, to work as a university professor in accounting. The Doctor of Philosophy (PhD) and the Doctor of Business Administration (DBA) are the most popular degrees. The PhD is the most common degree for those wishing to pursue a career in academia, while DBA programs generally focus on equipping business executives for business or public careers requiring research skills and qualifications. Professional qualifications Professional accounting qualifications include the chartered accountant designations and other qualifications including certificates and diplomas. In Scotland, chartered accountants of ICAS undergo Continuous Professional Development and abide by the ICAS code of ethics. In England and Wales, chartered accountants of the ICAEW undergo annual training, and are bound by the ICAEW's code of ethics and subject to its disciplinary procedures. In the United States, the requirements for joining the AICPA as a Certified Public Accountant are set by the Board of Accountancy of each state, and members agree to abide by the AICPA's Code of Professional Conduct and Bylaws. The ACCA is the largest global accountancy body with over 320,000 members, and the organisation provides an 'IFRS stream' and a 'UK stream'. Students must pass a total of 14 exams, which are arranged across three levels. Research Accounting research is research in the effects of economic events on the process of accounting, the effects of reported information on economic events, and the roles of accounting in organizations and society. It encompasses a broad range of research areas including financial accounting, management accounting, auditing and taxation. Accounting research is carried out both by academic researchers and practicing accountants. Methodologies in academic accounting research include archival research, which examines "objective data collected from repositories"; experimental research, which examines data "the researcher gathered by administering treatments to subjects"; analytical research, which is "based on the act of formally modeling theories or substantiating ideas in mathematical terms"; interpretive research, which emphasizes the role of language, interpretation and understanding in accounting practice, "highlighting the symbolic structures and taken-for-granted themes which pattern the world in distinct ways"; critical research, which emphasizes the role of power and conflict in accounting practice; case studies; computer simulation; and field research. Empirical studies document that leading accounting journals publish in total fewer research articles than comparable journals in economics and other business disciplines, and consequently, accounting scholars are relatively less successful in academic publishing than their business school peers. Due to different publication rates between accounting and other business disciplines, a recent study based on academic author rankings concludes that the competitive value of a single publication in a top-ranked journal is highest in accounting and lowest in marketing. Scandals The year 2001 witnessed a series of financial information frauds involving Enron, auditing firm Arthur Andersen, the telecommunications company WorldCom, Qwest and Sunbeam, among other well-known corporations. These problems highlighted the need to review the effectiveness of accounting standards, auditing regulations and corporate governance principles. In some cases, management manipulated the figures shown in financial reports to indicate a better economic performance. In others, tax and regulatory incentives encouraged over-leveraging of companies and decisions to bear extraordinary and unjustified risk. The Enron scandal deeply influenced the development of new regulations to improve the reliability of financial reporting, and increased public awareness about the importance of having accounting standards that show the financial reality of companies and the objectivity and independence of auditing firms. In addition to being the largest bankruptcy reorganization in American history, the Enron scandal undoubtedly is the biggest audit failure causing the dissolution of Arthur Andersen, which at the time was one of the five largest accounting firms in the world. After a series of revelations involving irregular accounting procedures conducted throughout the 1990s, Enron filed for Chapter 11 bankruptcy protection in December 2001. One consequence of these events was the passage of the Sarbanes–Oxley Act in the United States in 2002, as a result of the first admissions of fraudulent behavior made by Enron. The act significantly raises criminal penalties for securities fraud, for destroying, altering or fabricating records in federal investigations or any scheme or attempt to defraud shareholders. Fraud and error Accounting fraud is an intentional misstatement or omission in the accounting records by management or employees which involves the use of deception. It is a criminal act and a breach of civil tort. It may involve collusion with third parties. An accounting error is an unintentional misstatement or omission in the accounting records, for example misinterpretation of facts, mistakes in processing data, or oversights leading to incorrect estimates. Acts leading to accounting errors are not criminal but may breach civil law, for example, the tort of negligence. The primary responsibility for the prevention and detection of fraud and errors rests with the entity's management. See also Accounting information system Accounting records References External links Operations Research in Accounting on the Institute for Operations Research and the Management Sciences website Administrative theory fi:Laskentatoimi
2597
https://en.wikipedia.org/wiki/Arbitration%20in%20the%20United%20States
Arbitration in the United States
Arbitration, in the context of the law of the United States, is a form of alternative dispute resolution. Specifically, arbitration is an alternative to litigation through which the parties to a dispute agree to submit their respective evidence and legal arguments to a neutral third party (the arbitrator(s) or arbiter(s)) for resolution. In practice arbitration is generally used as a substitute for litigation, particularly when the judicial process is perceived as too slow, expensive or biased. In some contexts, an arbitrator may be described as an umpire. Arbitration in the United States' most overarching clause is the Federal Arbitration Act (officially the United States Arbitration Act of 1925, commonly referred to as the FAA). The Act stipulates that arbitration in a majority of instances is legal when both parties, either after or prior to the arising of a dispute, agree to the arbitration. The Supreme Court has taken a pro-arbitration stance across most but not all cases, although the federal government, most recently in 2022, has passed certain exemptions to arbitration agreements. States are also generally prohibited from passing their own laws which the Supreme Court and other federal courts believe limit or discriminate against arbitration. The practice of arbitration, especially "forced" arbitration clauses between workers/consumers and large companies or organizations, has been gaining a growing amount of scrutiny from both the general public and trial lawyers. Arbitration clauses face various challenges to enforcement, and clauses are unenforceable in the United States when a dispute which falls under the scope of an arbitration clause pertains to sexual harassment or assault. History Agreements to arbitrate were not enforceable at common law. This rule has been traced back to dictum by Lord Coke in Vynor’s Case, 8 Co. Rep. 81b, 77 Eng. Rep. 597 (1609), that agreements to arbitrate were revocable by either party. During the Industrial Revolution, merchants became increasingly opposed to this rule. They argued that too many valuable business relationships were being destroyed through years of expensive adversarial litigation, in courts whose rules differed significantly from the informal norms and conventions of businesspeople. Arbitration was promoted as being faster, less adversarial, and cheaper. The result was the New York Arbitration Act of 1920, followed by the United States Arbitration Act of 1925 (now known as the Federal Arbitration Act). Both made agreements to arbitrate valid and enforceable (unless one party could show fraud or unconscionability or some other ground for rescission which undermined the validity of the entire contract). Due to the subsequent judicial expansion of the meaning of interstate commerce, the Supreme Court reinterpreted the FAA in a series of cases in the 1980s and 1990s to cover almost the full scope of interstate commerce. In the process, the Court held that the FAA preempted many state laws covering arbitration, some of which had been passed by state legislatures to protect their workers and consumers against powerful business interests. Starting in 1991 with the Gilmer decision arbitration expanded dramatically in the employment context, growing from 2.1 percent of employees subject to mandatory arbitration clauses in 1992 to 53.9% in 2017. Types of Arbitration Commercial and other forms of contract arbitration Since commercial arbitration is based upon either contract law or the law of treaties, the agreement between the parties to submit their dispute to arbitration is a legally binding contract. All arbitral decisions are considered to be "final and binding". This does not, however, void the requirements of law. Any dispute not excluded from arbitration by virtue of law (for example, criminal proceedings) may be submitted to arbitration. Furthermore, arbitration agreements can only bind parties who have agreed, expressly or impliedly, to arbitrate, and parties cannot be required to submit to an arbitration process if they have not previously "agreed so to submit". It is only through the advance agreement of the parties that the arbitrator derives [any] authority to resolve disputes. Arbitration cannot bind non-signatories to an arbitration contract, even if those non-signatories later become involved with a signatory to a contract by accident (usually through the commission of a tort). However, third-party non-signatories can be bound by arbitration agreements based on theories of estoppel, agency relationships with a party, assumption of the contract containing the arbitration agreement, third-party beneficiary status under the contract, or piercing the corporate veil. The question of whether two parties have actually agreed to arbitrate any disputes is one for judicial determination, because if the parties have not agreed to arbitrate then the arbitrator would have no authority. Where there is an arbitration agreement, doubts concerning "the scope of arbitrable issues should be resolved in favor of arbitration", but issues regarding whether a claim falls within the scope of arbitrable issues is a judicial matter, unless the parties have expressly agreed that the arbitrator may decide the scope of his or her own authority. Most courts hold that general arbitration clauses, such as an agreement to refer to arbitration any dispute "arising from" or "related to" a particular contract, do not authorize an arbitrator to determine whether a particular issue arises from or relates to the contract concerned. A minority view embraced by some courts is that this broad language can evidence the parties' clear and unmistakable intention to delegate the resolution of all issues to the arbitrator, including issues regarding arbitrability. Labor arbitration Arbitration may be used as a means of resolving labor disputes, an alternative to strikes and lockouts. Labor arbitration comes in two varieties: interest arbitration, which provides a method for resolving disputes about the terms to be included in a new contract when the parties are unable to agree, and grievance arbitration, which provides a method for resolving disputes over the interpretation and application of a collective bargaining agreement. Arbitration has also been used as a means of resolving labor disputes for more than a century. Labor organizations in the United States, such as the National Labor Union, called for arbitration as early as 1866 as an alternative to strikes to resolve disputes over the wages, benefits and other rights that workers would enjoy. Interest arbitration Governments have relied on arbitration to resolve particularly large labor disputes, such as the Coal Strike of 1902. This type of arbitration, wherein a neutral arbitrator decides the terms of the collective bargaining agreement, is commonly known as interest arbitration. The United Steelworkers of America adopted an elaborate form of interest arbitration, known as the Experimental Negotiating Agreement, in the 1970s as a means of avoiding the long and costly strikes that had made the industry vulnerable to foreign competition. Major League Baseball uses a variant of interest arbitration, in which an arbitrator chooses between the two sides' final offers, to set the terms for contracts for players who are not eligible for free agency. Interest arbitration is now most frequently used by public employees who have no right to strike (for example, law enforcement and firefighters). Grievance arbitration Unions and employers have also employed arbitration to resolve employee and union grievances arising under a collective bargaining agreement. The Amalgamated Clothing Workers of America made arbitration a central element of the Protocol of Peace it negotiated with garment manufacturers in the second decade of the twentieth century. Grievance arbitration became even more popular during World War II, when most unions had adopted a no-strike pledge. The War Labor Board, which attempted to mediate disputes over contract terms, pressed for inclusion of grievance arbitration in collective bargaining agreements. The Supreme Court subsequently made labor arbitration a key aspect of federal labor policy in three cases which came to be known as the Steelworkers' Trilogy. The Court held that grievance arbitration was a preferred dispute resolution technique and that courts could not overturn arbitrators' awards unless the award does not draw its essence from the collective bargaining agreement. State and federal statutes may allow vacating an award on narrow grounds (e.g., fraud). These protections for arbitrator awards are premised on the union-management system, which provides both parties with due process. Due process in this context means that both parties have experienced representation throughout the process, and that the arbitrators practice only as neutrals. See National Academy of Arbitrators. Securities arbitration In the United States securities industry, arbitration has long been the preferred method of resolving disputes between brokerage firms, and between firms and their customers. The arbitration process operates under its own rules, as defined by contract. Securities arbitrations are held primarily by the Financial Industry Regulatory Authority. The securities industry uses pre-dispute arbitration agreements, through which the parties agree to arbitrate their disputes before any such dispute arises. Those agreements were upheld by the United States Supreme Court in Shearson v. MacMahon, 482 U.S. 220 (1987) and today nearly all disputes involving brokerage firms, other than Securities class action claims, are resolved in arbitration. The SEC has come under fire from members of the Senate Judiciary Committee for not fulfilling statutory duty to protect individual investors, because all brokers require arbitration, and arbitration does not provide a court-supervised discovery process, require arbitrators to follow rules of evidence or result in written opinions establishing precedence, or case law, or provide the efficiency gains it once did. Arbitrator selection bias, hidden conflicts of interest, and a case where an arbitration panel refused to follow instructions handed down from a judge, were also raised as issues. Judicial arbitration Some state court systems have promulgated court-ordered arbitration; family law (particularly child custody) is the most prominent example. Judicial arbitration is often merely advisory dispute resolution technique, serving as the first step toward resolution, but not binding either side and allowing for trial de novo. Litigation attorneys present their side of the case to an independent tertiary lawyer, who issues an opinion on settlement. Should the parties in question decide to continue to dispute resolution process, there can be some sanctions imposed from the initial arbitration per terms of the contract. Arbitration clauses The federal government has expressed a policy in support of arbitration clauses, because they reduce the burden on court systems to resolve disputes. This support is found in the Federal Arbitration Act, (FAA) which permits compulsory and binding arbitration, under which parties give up the right to appeal an arbitrator's decision to a court. In Prima Paint Corp. v. Flood & Conklin Mfg. Co., the U.S. Supreme Court established the "separability principle", under which enforceability of a contract must be challenged in arbitration before any court action, unless the arbitration clause itself has been challenged. Today, mandatory arbitration clauses are widespread in the United States, including by 15 of the largest 20 U.S. credit card issuers, 7 of the 8 largest cell phone companies, and 2 out of 3 major bike sharing companies in Seattle. Arbitration clauses can be enforceable if "signed" electronically, though California courts have stated that a handwritten signature to an arbitration agreement is easier to enforce than one done electronically. The FAA has also been interpreted to preempt and invalidate state laws which prevent or discriminate against the enforcement of arbitration agreements. In one such case in 2023, which overruled California Assembly Bill 51, the Ninth Circuit Court of Appeals found that California's bill placed restrictions on the "broad national policy" favoring arbitration agreements. Similar fates have been bestowed upon legislation in New Jersey, New York, and Washington state which attempted to reduce the scope of arbitration clauses. In insurance law, arbitration is complicated by the fact that insurance is regulated at the state level under the McCarran–Ferguson Act. From a federal perspective, however, a circuit court ruling has determined that McCarran-Ferguson requires a state statute rather than administrative interpretations. The Missouri Department of Insurance attempted to block a binding arbitration agreement under its state authority, but since this action was based only on a policy of the department and not on a state statute, the United States district court found that the Department of Insurance did not have the authority to invalidate the arbitration agreement. In AT&T Mobility v. Concepcion (2011), the Supreme Court upheld an arbitration clause in a consumer standard form contract which waived the right to a lawsuit and class action. However, this clause was relatively generous in that the business paid all fees unless the action was determined to be frivolous and a small-claims court action remained available; these types of protections are recommended for the contract to remain enforceable and not unconscionable. The Supreme Court has also ruled that questions on whether an arbitration clause should be enforced at all permits litigation involving the rest of the case to be stayed. In 2023's Coinbase v. Bielski, the court ruled that federal district courts must stay proceedings involving a case during an arbitration appeal on such case. Arbitration clauses can also be written in a manner which excludes certain disputes from being required to be sent to arbitration. Motions to compel arbitration involving excluded disputes then on would not be honored, as seen in a 2023 ruling made by the Ninth Circuit via one of its judicial panels. In such ruling, the casino firm Saipan included an arbitration agreement which exempted licensing claims from being subject to mandatory arbitration. Opt out provisions Some arbitration clauses in the United States offer opportunities for parties to opt out of the arbitration agreement and not be subject to it. Many companies utilize opt out clauses within their arbitration agreements, most often giving 30 or 60 days for consumers in contracts between consumers and companies to either send a rejection notice by mail or by email. Including an opt out provision has been found to improve the likelihood of a contract to be found conscionable. In Hopkins v. World Acceptance Corp, a case cited in Ferrara v. Luxottica, failure to opt out of an arbitration agreement dilutes the ability to combat a motion to compel arbitration. Many credit card companies which have arbitration agreements allow card signers to opt out, although company procedures may make it difficult for consumers to exercise that option. Prohibitions on arbitration Challenges to clause enforcement Determination of validity Although properly drafted arbitration clauses are generally valid, they are subject to challenge in court for compliance with laws and public policy. Arbitration clauses may potentially be challenged as unconscionable and, therefore, unenforceable. Typically, the validity of an arbitration clause is decided by a court rather than an arbitrator. However, if the validity of the entire arbitration agreement is in dispute, then the issue is decided by the arbitrators in the first instance. This is known as the principle of separability. For example, in Rent-A-Center, West, Inc. v. Jackson, the Supreme Court of the United States held that "under the FAA, where an agreement to arbitrate includes an agreement that the arbitrator will determine the enforceability of the agreement, if a party challenges specifically the enforceability of that particular agreement, the district court considers the challenge, but if a party challenges the enforceability of the agreement as a whole, the challenge is for the arbitrator." In other words, the law typically allows federal courts to decide these types of "gateway" or validity questions, but the Supreme Court ruled that since Jackson targeted the entire contract rather than a specific clause, the arbitrator decided the validity. Public Citizen, an advocacy organization opposed to the enforcement of pre-dispute arbitration agreements, characterized the decision negatively: "the court said that companies can write their contracts so that the companies' own arbitrator decides whether it's fair to submit a case to that arbitrator." Arbitration clauses must also further provide a clear procedure, and confusion and/or ambiguity in an arbitration clause can also cause such clause to be struck down. One example of this phenomenon occurred in a lawsuit against SoLo Funds, where a Philadelphia federal judge ruled that because the app did not make clear its arbitration requirements, the clause was unconscionable and SoLo's bid to compel arbitration was not granted. Ambiguity-related nullifications of arbitration agreements further extend to proof of agreement between the parties, as in Romano v. BCBSM, Blue Cross Blue Shield of Michigan failed to compel arbitration against a former employee in June 2023 after US district judge George Caram Steeh III ruled that the online application process failed to adequately provide the employee notice of the arbitration agreement he would otherwise be bound to. Modification of the arbitration clause A significant challenge to arbitrate agreements arose out of South Carolina through the case Hooters v. Phillips. In the 1999 case, a federal district court found that Hooters modified its dispute resolution rules in 1996 to be unfair enough that the court held that the agreement was unconscionable, partly due to Hooters requiring that all of the arbitrators in dispute resolution cases be selected from a list pre-approved by the company, which included Hooters managers. In April of 2022, the Court of Appeals for the Fourth Circuit found that in Coady v. Nationwide Motor Sales, because Nationwide Motor Sales' contract enabled them to be the sole party permitted to modify the contract that Coady signed. Citing Hooters v. Phillips, the court expressed when an employer has the ability “in whole or in part” to modify the arbitration provision without notice to its employees. California's Court of Appeal reached a similar conclusion in Peleg v. Neiman Marcus, in which a unilateral modification to an arbitration agreement invalidated the clause. Another instance of modified arbitration clauses causing it to be overturned was found in a privacy-related dispute between Amazon and its drivers who work under the company's Amazon Flex service. Amazon Flex drivers, who filed a class action lawsuit claiming that the company spied on private Facebook conversations, alleged that the updated 2019 terms related to Amazon Flex were not delivered properly to them, and that the 2016 terms, which did not include an arbitration clause, should apply. Ultimately, the Ninth Circuit decided that since Amazon was the party compelling arbitration, the burden of proof was on Amazon to prove that its flex drivers received notice of the 2019 updated terms, and that arbitration should not be compelled. Waiving the right to arbitrate Some courts have found that parties can waive their right to compel arbitration through various forms of actions. In California, as demonstrated by Davis v. Shiekh Shoes and Espinoza v. Superior Court, a party wishing to compel arbitration though failing to pay arbitration fees in a timely manner waives their right to compel arbitration, and must resolve the dispute in court. More importantly, the Supreme Court found in Morgan v. Sundance that a party which does not compel arbitration when a valid clause exists waives its right to compel arbitration. Justice Elena Kagan, writing for the court's unanimous ruling in favor of hourly Taco Bell employee Robyn Morgan, found that the Eighth Circuit created "special rules" in which Morgan was compelled to arbitrate based on Sundance's prejudice (delay) of compelling arbitration. The opinion on a party waiving its right to compel arbitration if it had litigated extensively prior to the motion has been further confirmed in light of Davis and Espinoza when one of Bronx County's justices ruled in Worbes Corp v. Sebrow. Justice Fidel Gomez states that if a party who intended to compel arbitration brought a "substantive defense" before the court, served a trial notice, moved to depose a witness, or "interposed a counterclaim demanding money damages", that party would have waived its right to compel arbitration. Justice Gomez, however, clarified that such right would not be waived by a party if a defendant "had only defended its position and had not acted in a manner that waives the right to arbitrate". Unbearable arbitration fees Arbitration clauses can be void in instances where the costs of arbitration would be too high. In 1999's Shankle v. B-G Maintenance Management of Colorado, Inc, the 10th Circuit Court of Appeals refused to grant a motion to compel arbitration on the basis that the fees were too high for the plaintiff Matthew Shankle. The Texas Courts of Appeals found in 2022's Cont'l Homes of Texas v. Perez that due to unaffordable arbitration costs for the plaintiffs and the arbitration agreement not being an adequate remedy for litigation. Severability-related challenges In January 2023, a federal court in Delaware recommended that motions to compel arbitration which conflicted with the Employee Retirement Income Security Act of 1974 not be honored in Burnett et al. v. Prudent Financial Services LLC, et al. (C.A. No. 22-270-RGA-JLH). Presiding magistrate judge Jennifer Hall interpreted that based on recent action by the Supreme Court and other federal courts, not every provision within the arbitration agreement should be validated. Additionally, Judge Hall prospected that entire arbitration agreements could become invalid if a single provision is found to be unenforceable by a court. The notion of a single unconscionable provision invalidating the arbitration agreement, even if such provision was outside of the arbitration-related clauses of a contract, was expanded the following June when a California court ruled in Alberto v. Cambrian Homecare that a confidentiality agreement which prohibited discussing compensation and salary information, and threatened litigation and the collection of attorneys fees, was unenforceable and also declared the arbitration agreement unenforceable. Other challenges In 2014's Atalese v. U.S. Legal Services Group, L.P, the Supreme Court of New Jersey ruled that arbitration clauses must have a valid jury trial waiver, which the court saw as a constitutional right which must be explicitly waived in a contract, in order to be effective, a position reaffirmed by Pennsylvania's Superior Court in 2022's Chiluti v. Uber. A Pennsylvania appeals court in Philadelphia ruled in March 2023 that parents cannot bind their children to arbitration agreements over injuries, in a lawsuit between parents and a local trampoline park. Transportation workers exemption The Federal Arbitration Act also explicitly provides that workers involved in transportation are exempt from arbitration agreements, which the Supreme Court unanimously reaffirmed in various cases, with one notable example being 2022's Southwest Airlines v. Saxon. This, however, does not apply to drivers working for Uber and other ridesharing services. Acts of Congress Ending Forced Arbitration of Sexual Assault and Sexual Harassment Act In 2022, Congress passed the Ending Forced Arbitration of Sexual Assault and Sexual Harassment Act (EFASASHA or EFAA), which excludes these types of complaints from arbitration clauses. Congress also included a ban on class action waivers for claims covered under the act. Under the law, claims which are filed after March 3, 2022 and fall under the scope of EFAA shall have agreements to submit disputes to binding arbitration and class action waivers within contracts signed deemed unenforceable for the entire case, though the law allows for claimants to have a case decided by binding arbitration if the plaintiff wishes upon filing. The law was championed by Gretchen Carlson, a former Fox News host sexually harassed for many years by then CEO Roger Ailes; she also opposed the use of non-disclosure agreements to shield perpetrators. The law was introduced by Illinois House Democrat Cheri Bustos as HR 4445, and passed the House of Representatives by a 335-97 vote, with all no votes coming from Republicans. The EFAA passed the Senate with unanimous consent, and was signed into law by President Joe Biden on March 3, 2022. The law became effective immediately at signing. Some legal agencies raised concerns that the law could allow for claims attached to a sexual harassment or sexual assault dispute to bypass arbitration as well. These concerns were ultimately confirmed in February 2023, where New York federal judge Paul A. Engelmayer ruled in two lawsuits against the company Everyrealm that if at least one claim in a single case was an act of sexual assault or sexual harassment, the pre-dispute arbitration agreement was unenforceable and arbitration could not be compelled. Engelmayer's decision was rooted in the decision from Congress to directly amend the Federal Arbitration Act, and its actions to do so were indicative of its intention to prohibit the practice in entire cases which the EFAA covers; Engelmayer, however, clarified that the claim of sexual assault or harassment must be reasonable and that the EFAA does not enable implausible claims of sexual harassment to be used to "dodge" arbitration agreements. One month later, a California court ruling on a sexual harassment lawsuit filed against Tesla further confirmed the EFAA's ability to ban compelling arbitration in sexual harassment suits, and a second New York federal court earlier came to a similar conclusion in a case filed by an investment banker. Forced Arbitration Injustice Repeal Act The Forced Arbitration Injustice Repeal Act is a bill filed in every meeting of Congress since the 116th Congress which, if passed, contains provisions which ban arbitration agreements and class action waivers in cases between consumers and large companies, as well as employers and large companies. The bill is generally supported by the Democratic Party as well as Freedom Caucus member Matt Gaetz, though has usually been opposed by the Republican Party. In the 116th and 117th congresses, the bill passed the House but failed to pass the Senate; the bill has since been reintroduced in the 118th Congress by Democratic senators Sherrod Brown and Richard Blumenthal, and Democratic representative Hank Johnson. Protecting Older Americans Act The Protecting Older Americans Act is pending legislation first filed in the 118th Congress by South Carolina Republicans Lindsey Graham in the Senate and Nancy Mace in the House. The law would ban and overturn arbitration agreements in cases involving discrimination based on age. Rulings and actions by federal agencies Federal Student Loans In November 2022, the Department of Education and the office on Federal Student Aid passed new rules which included reinstating a ban on institutions participating in its Direct Loan Program from utilizing pre-dispute mandatory arbitration agreements and class action waivers in cases relating to Borrower Defense to Repayment. The new rules also require institutions to disclose their uses of arbitration to the Department and to provide certain records connected with any borrower defense claim against the school to the Department. The Department of Education stated its reasoning for the ban is that class action waivers and arbitration agreements are too complex for much of the general public to comprehend and that arbitration "rarely" gives favorable decisions to consumers.The rules become effective on July 1, 2023. Department of Labor The United States Department of Labor was noted in May 2023 by Bloomberg Law journalist Khorri Atkinson for its increased focus and hostility towards mandatory arbitration and its use by employers for violating Department of Labor rules. Solicitor of Labor Seema Nanda has stated that the Department will pursue more cases where employers are utilizing mandatory arbitration to commit violations of the Fair Labor Standards Act of 1938. Proceedings Various bodies of rules have been developed that can be used for arbitration proceedings. The rules to be followed by the arbitrator are specified by the agreement establishing the arbitration. Enforcement of award In some cases, a party may comply with an award voluntarily. However, in other cases a party will have to petition to receive a court judgment for enforcement through various means such as a writ of execution, garnishment, or lien. If the property is in another state, then a sister-state judgment (relying on the Full Faith and Credit Clause) can be received by filing to enforce the judgment in the state where the property is located. Vacatur Under the Federal Arbitration Act, courts can only vacate awards for limited reasons set out in statute with similar language in the state model Uniform Arbitration Act. The court will generally not change the arbitrator's findings of fact but will decide only whether the arbitrator was guilty of malfeasance, or whether the arbitrator exceeded the limits of his or her authority in the arbitral award or whether the award was made in manifest disregard of law or conflicts with well-established public policy. Arbitration Fairness Act See also Arbitration award Consumer arbitration Conciliation Dispute resolution Epic Systems Corp. v. Lewis Expert determination London Court of International Arbitration Mediation Negotiation Special referee Subrogation Tort reform UNCITRAL Model Law on International Commercial Arbitration National Arbitration Forum National Academy of Arbitrators For the relevant conflict of laws elements, see contract, forum selection clause, choice of law clause, proper law, and lex loci arbitri References Further reading Jerold S. Auerbach, Justice Without Law?: Non-Legal Dispute Settlement in American History (Oxford: Oxford University Press, 1983). Mark J. Astarita, Esq., Introduction to Securities Arbitration (SECLaw.com, 2000 - Securities Arbitration Overview-2023 Update) David Sherwyn, Bruce Tracey & Zev Eigen. "In Defense of Mandatory Arbitration of Employment Disputes: Saving the Baby, Tossing out the Bath Water, and Constructing a New Sink in the Process," 2 U. Pa. J. Lab. & Emp. L. 73 (1999); n.b., abbreviated source in this legal citation format is the University of Pennsylvania Journal of Labor and Employment Law, Vol. 2, p. 73. Ed Brunet, J.D., Arbitration Law in America: A Critical Assessment, Cambridge University Press, 2006. Gary Born, International Civil Litigation in United States Courts (Aspen 4th ed. 2006) (with Bo Rutledge) (3rd ed. 1996) (2nd ed. 1992) (1st ed. 1989) External links Read actual arbitration awards and find arbitrator's resumes at GVSU American Arbitration Association's Home Page An Example of Labor Arbitration in the United States (Vulcan Iron Works and the Machinists' Union, 1981) . United States Law of the United States
2598
https://en.wikipedia.org/wiki/Adversarial%20system
Adversarial system
The adversarial system or adversary system is a legal system used in the common law countries where two advocates represent their parties' case or position before an impartial person or group of people, usually a judge or jury, who attempt to determine the truth and pass judgment accordingly. It is in contrast to the inquisitorial system used in some civil law systems (i.e. those deriving from Roman law or the Napoleonic code) where a judge investigates the case. The adversarial system is the two-sided structure under which criminal trial courts operate, putting the prosecution against the defense. Basic features Adversarial systems are considered to have three basic features. The first is a neutral decision-maker such as a judge or jury. The second is presentation of evidence in support of each party's case, usually by lawyers. The third is a highly structured procedure. The rules of evidence are developed based upon the system of objections of adversaries and on what basis it may tend to prejudice the trier of fact which may be the judge or the jury. In a way the rules of evidence can function to give a judge limited inquisitorial powers as the judge may exclude evidence he or she believes is not trustworthy, or irrelevant to the legal issue at hand. Peter Murphy in his Practical Guide to Evidence recounts an instructive example. A frustrated judge in an English (adversarial) court finally asked a barrister after witnesses had produced conflicting accounts, "Am I never to hear the truth?" "No, my lord, merely the evidence", replied counsel. Parties Judges in an adversarial system are impartial in ensuring the fair play of due process, or fundamental justice. Such judges decide, often when called upon by counsel rather than of their own motion, what evidence is to be admitted when there is a dispute; though in some common law jurisdictions judges play more of a role in deciding what evidence to admit into the record or reject. At worst, abusing judicial discretion would actually pave the way to a biased decision, rendering obsolete the judicial process in question—rule of law being illicitly subordinated by rule of man under such discriminating circumstances. Lord Devlin in The Judge said: "It can also be argued that two prejudiced searchers starting from opposite ends of the field will between them be less likely to miss anything than the impartial searcher starting at the middle." The right to counsel in criminal trials was initially not accepted in some adversarial systems. It was believed that the facts should speak for themselves, and that lawyers would just blur the matters. As a consequence, it was only in 1836 that England gave suspects of felonies the formal right to have legal counsel (the Prisoners' Counsel Act 1836), although in practice, English courts routinely allowed defendants to be represented by counsel from the mid-18th century. During the second half of the 18th century, advocates like Sir William Garrow and Thomas Erskine, 1st Baron Erskine, helped usher in the adversarial court system used in most common law countries today. In the United States, however, personally retained counsel have had a right to appear in all federal criminal cases since the adoption of the United States Constitution, and in state cases at least since the end of the civil war, although nearly all provided this right in their state constitutions or laws much earlier. Appointment of counsel for indigent defendants was nearly universal in federal felony cases, though it varied considerably in state cases. It was not until 1963 that the U.S. Supreme Court declared that legal counsel must be provided at the expense of the state for indigent felony defendants, under the federal Sixth Amendment, in state courts. See Gideon v. Wainwright, . Criminal proceedings In criminal adversarial proceedings, an accused is not compelled to give evidence. Therefore, they may not be questioned by a prosecutor or judge unless they choose to be; however, should they decide to testify, they are subject to cross-examination and could be found guilty of perjury. As the election to maintain an accused person's right to silence prevents any examination or cross-examination of that person's position, it follows that the decision of counsel as to what evidence will be called is a crucial tactic in any case in the adversarial system and hence it might be said that it is a lawyer's manipulation of the truth. Certainly, it requires the skills of counsel on both sides to be fairly equally pitted and subjected to an impartial judge. In some adversarial legislative systems, the court is permitted to make inferences on an accused's failure to face cross-examination or to answer a particular question. This obviously limits the usefulness of silence as a tactic by the defense. In the United States, the Fifth Amendment has been interpreted to prohibit a jury from drawing a negative inference based on the defendant's invocation of his or her right not to testify, and the jury must be so instructed if the defendant requests. By contrast, while defendants in most civil law systems can be compelled to give statements, these statements are not subject to cross-examinations by the prosecution and are not given under oath. This allows the defendant to explain their side of the case without being subject to cross-examination by a skilled opposition. However, this is mainly because it is not the prosecutor but the judge who questions the defendant. The concept of "cross"-examination is entirely due to adversarial structure of the common law. Comparison with inquisitorial systems The name "adversarial system" may be misleading in that it implies it is only within this type of system in which there are opposing prosecution and defense. This is not the case, and both modern adversarial and inquisitorial systems have the powers of the state separated between a prosecutor and the judge and allow the defendant the right to counsel. Indeed, the European Convention on Human Rights and Fundamental Freedoms in Article 6 requires these features in the legal systems of its signatory states. One of the most significant differences between the adversarial system and the inquisitorial system occurs when a criminal defendant admits to the crime. In an adversarial system, there is no more controversy and the case proceeds to sentencing; though in many jurisdictions the defendant must have allocution of her or his crime; an obviously false confession will not be accepted even in common law courts. By contrast, in an inquisitorial system, the fact that the defendant has confessed is merely one more fact that is entered into evidence, and a confession by the defendant does not remove the requirement that the prosecution present a full case. This allows for plea bargaining in adversarial systems in a way that is difficult or impossible in inquisitional system, and many felony cases in the United States are handled without trial through such plea bargains. See also Adversary evaluation Exclusionary rule Parallel thinkingdescribed as a systemic alternative References Further reading Judiciaries Legal systems
2616
https://en.wikipedia.org/wiki/Adware
Adware
Adware, often called advertising-supported software by its developers, is software that generates revenue for its developer by automatically generating online advertisements in the user interface of the software or on a screen presented to the user during the installation process. The software may generate two types of revenue: one is for the display of the advertisement and another on a "pay-per-click" basis, if the user clicks on the advertisement. Some advertisements also act as spyware, collecting and reporting data about the user, to be sold or used for targeted advertising or user profiling. The software may implement advertisements in a variety of ways, including a static box display, a banner display, a full screen, a video, a pop-up ad or in some other form. All forms of advertising carry health, ethical, privacy and security risks for users. The 2003 Microsoft Encyclopedia of Security and some other sources use the term "adware" differently: "any software that installs itself on your system without your knowledge and displays advertisements when the user browses the Internet", i.e., a form of malware. Some software developers offer their software free of charge, and rely on revenue from advertising to recoup their expenses and generate income. Some also offer a version of the software at a fee without advertising. Advertising-supported software In legitimate software, the advertising functions are integrated into or bundled with the program. Adware is usually seen by the developer as a way to recover development costs, and generate revenue. In some cases, the developer may provide the software to the user free of charge or at a reduced price. The income derived from presenting advertisements to the user may allow or motivate the developer to continue to develop, maintain and upgrade the software product. The use of advertising-supported software in business is becoming increasingly popular, with a third of IT and business executives in a 2007 survey by McKinsey & Company planning to be using ad-funded software within the following two years. Advertisement-funded software is also one of the business models for open-source software. Application software Some software is offered in both an advertising-supported mode and a paid, advertisement-free mode. The latter is usually available by an online purchase of a license or registration code for the software that unlocks the mode, or the purchase and download of a separate version of the software. Some software authors offer advertising-supported versions of their software as an alternative option to business organizations seeking to avoid paying large sums for software licenses, funding the development of the software with higher fees for advertisers. Examples of advertising-supported software include Adblock Plus ("Acceptable Ads"), the Windows version of the Internet telephony application Skype, and the Amazon Kindle 3 family of e-book readers, which has versions called "Kindle with Special Offers" that display advertisements on the home page and in sleep mode in exchange for substantially lower pricing. In 2012, Microsoft and its advertising division, Microsoft Advertising, announced that Windows 8, the major release of the Microsoft Windows operating system, would provide built-in methods for software authors to use advertising support as a business model. The idea had been considered since as early as 2005. Most editions of Windows 10 include adware by default. Software as a service Support by advertising is a popular business model of software as a service (SaaS) on the Web. Notable examples include the email service Gmail and other Google Workspace products (previously called Google Apps and G Suite), and the social network Facebook. Microsoft has also adopted the advertising-supported model for many of its social software SaaS offerings. The Microsoft Office Live service was also available in an advertising-supported mode. Definition of Spyware, Consent, and Ethics In the view of Federal Trade Commission staff, there appears to be general agreement that software should be considered "spyware" only if it is downloaded or installed on a computer without the user's knowledge and consent. However, unresolved issues remain concerning how, what, and when consumers need to be told about software installed on their computers. For instance, distributors often disclose in an end-user license agreement that there is additional software bundled with primary software, but some participants did not view such disclosure as sufficient to infer consent. Much of the discussion on the topic involves the idea of informed consent, the assumption being that this standard eliminates any ethical issues with any given software's behavior. However, if a majority of important software, websites and devices were to adopt similar behavior and only the standard of informed consent is used, then logically a user's only recourse against that behavior would become not using a computer. The contract would become an ultimatum - agree or be ostracized from the modern world. This is a form of psychological coercion and presents an ethical problem with using implied or inferred consent as a standard. There are notable similarities between this situation and binding arbitration clauses which have become inevitable in contracts in the United States. Furthermore, certain forms and strategies of advertising have been shown to lead to psychological harm, especially in children. One example is childhood eating disorders - several studies have reported a positive association between exposure to beauty and fashion magazines and an increased level of weight concerns or eating disorder symptoms in girls. Malware The term adware is frequently used to describe a form of malware (malicious software) which presents unwanted advertisements to the user of a computer. The advertisements produced by adware are sometimes in the form of a pop-up, sometimes in an "unclosable window", and sometimes injected into web pages. When the term is used in this way, the severity of its implication varies. While some sources rate adware only as an "irritant", others classify it as an "online threat" or even rate it as seriously as computer viruses and trojans. The precise definition of the term in this context also varies. Adware that observes the computer user's activities without their consent and reports it to the software's author is called spyware. Adwares may collect the personal information of the user, causing privacy concerns. However, most adware operates legally and some adware manufacturers have even sued antivirus companies for blocking adware. Programs have been developed to detect, quarantine, and remove advertisement-displaying malware, including Ad-Aware, Malwarebytes' Anti-Malware, Spyware Doctor and Spybot – Search & Destroy. In addition, almost all commercial antivirus software currently detect adware and spyware, or offer a separate detection module. A new wrinkle is adware (using stolen certificates) that disables anti-malware and virus protection; technical remedies are available. Adware has also been discovered in certain low-cost Android devices, particularly those made by small Chinese firms running on Allwinner systems-on-chip. There are even cases where adware code is embedded deep into files stored on the system and boot partitions, to which removal involves extensive (and complex) modifications to the firmware. In recent years, machine-learning based systems have been implemented to detect malicious adware on Android devices by examining features in the flow of network traffic. See also Malvertising Online advertising Typhoid adware Notes References Online advertising Types of malware
2640
https://en.wikipedia.org/wiki/Ajaccio
Ajaccio
Ajaccio (, , ; French: ; or ; , locally: ; ) is a French commune, prefecture of the department of Corse-du-Sud, and head office of the Collectivité territoriale de Corse (capital city of Corsica). It is also the largest settlement on the island. Ajaccio is located on the west coast of the island of Corsica, southeast of Marseille. The original city went into decline in the Middle Ages, but began to prosper again after the Genoese built a citadel in 1492, to the south of the earlier settlement. After the Corsican Republic was declared in 1755, the Genoese continued to hold several citadels, including Ajaccio, until the French took control of the island. The inhabitants of the commune are known as Ajacciens (men) or Ajacciennes (women). The most famous of these is Napoleon Bonaparte, who was born in Ajaccio in 1769, and whose ancestral home, the Maison Bonaparte, is now a museum. Other dedications to him in the city include Ajaccio Napoleon Bonaparte Airport. Toponymy Several hypotheses have been advanced as to the etymology of the name Ajaccio (Aiacciu in Corsican, Addiazzo on old documents). Among these, the most prestigious suggests that the city was founded by the Greek legendary hero Ajax and named after him. Other more realistic explanations are, for example, that the name could be related to the Tuscan agghiacciu meaning "sheep pens". Another explanation, supported by Byzantine sources from around the year 600 AD called the city Agiation which suggests a possible Greek origin for the word, agathè could mean "good luck" or "good mooring" (this was also the root of the name of the city of Agde). Geography Location Ajaccio is located on the west coast of the island of Corsica, southeast of Marseille. The commune occupies a sheltered position at the foot of wooded hills on the northern shore of the Gulf of Ajaccio between Gravona and the pointe de la Parata and includes the îles Sanguinaires (Bloody Islands). The harbour lies to the east of the original citadel below a hill overlooking a peninsula which protects the harbour in the south where the Quai de la Citadelle and the Jettée de la Citadelle are. The modern city not only encloses the entire harbour but takes up the better part of the Gulf of Ajaccio and in suburban form extends for some miles up the valley of the river Gravona. The flow from that river is nearly entirely consumed as the city's water supply. Many beaches and coves border its territory and the terrain is particularly rugged in the west where the highest point is . Urbanism Although the commune of Ajaccio has a large area (82.03 km2), only a small portion of this is urbanized. Therefore, the urban area of Ajaccio is located in the east of the commune on a narrow coastal strip forming a densely populated arc. The rest of the territory is natural with habitation of little importance and spread thinly. Suburbanization occurs north and east of the main urban area. The original urban core, close to the old marshy plain of Cannes was abandoned in favour of the current city which was built near the Punta della Lechia. It has undergone various improvements, particularly under Napoleon, who originated the two current major structural arteries (the Cours Napoleon oriented north–south and the Cours Grandval oriented east–west). Ajaccio experienced a demographic boom in the 1960s, which explains why 85% of dwellings are post-1949. This is reflected in the layout of the city which is marked by very large areas of low-rise buildings and concrete towers, especially on the heights (Les Jardins de l'Empereur) and in the north of the city - e.g. the waterfront, Les Cannes, and Les Salines. A dichotomy appears in the landscape between the old city and the imposing modern buildings. Ajaccio gives the image of a city built on two different levels. Climate The city has a Mediterranean climate which is Csa in the Köppen climate classification. The average annual sunshine is 2726 hours. There are important local climatic variations, especially with wind exposure and total precipitation, between the city centre, the airport, and the îles Sanguinaires. The annual average rainfall is at the Campo dell'Oro weather station (as per the chart) and at the Parata: the third-driest place in metropolitan France. The heat and dryness of summer are somewhat tempered by the proximity of the Mediterranean Sea except when the sirocco is blowing. In autumn and spring, heavy rain-storm episodes may occur. Winters are mild and snow is rare. Ajaccio is the French city which holds the record for the number of thunderstorms in the reference period 1971–2000 with an average of 39 thunderstorm days per year. On 14 September 2009, the city was hit by a tornado with an intensity of F1 on the Fujita scale. There was little damage except torn billboards, flying tiles, overturned cars, and broken windows but no casualties. Weather Data for Ajaccio Heraldry History Antiquity The city was not mentioned by the Greek geographer Ptolemy of Alexandria in the 2nd century AD despite the presence of a place called Ourkinion in the Cinarca area. It is likely that the city of Ajaccio had its first development at this time. The 2nd century was a period of prosperity in the Mediterranean basin (the Pax Romana) and there was a need for a proper port at the head of the several valleys that lead to the Gulf able to accommodate large ships. Some important underwater archaeological discoveries recently made of Roman ships tend to confirm this. Further excavations conducted recently led to the discovery of important early Christian remains suggest that an upwards reevaluation might be necessary of the size of Ajaccio city in Late Antiquity and the beginning of the Middle Ages. The city was in any case already significant enough to be the seat of a diocese, mentioned by Pope Gregory the Great in 591. The city was then further north than the location chosen later by the Genoese - in the location of the existing quarters of Castel Vecchio and Sainte-Lucie. The earliest certain written record of a settlement at Ajaccio with a name ancestral to its name was the exhortation in Epistle 77 written in 601AD by Gregory the great to the Defensor Boniface, one of two known rectors of the early Corsican church, to tell him not to leave Aléria and Adjacium without bishops. There is no earlier use of the term and Adjacium is not an attested Latin word, which probably means that it is a Latinization of a word in some other language. The Ravenna Cosmography of about 700 AD cites Agiation, which sometimes is taken as evidence of a prior Greek city, as -ion appears to be a Greek ending. There is, however, no evidence at all of a Greek presence on the west coast and the Ionians at Aléria on the east coast had been expelled by the Etruscans long before Roman domination. Ptolemy, who must come the closest to representing indigenous names, lists the Lochra River just south of a feature he calls the "sandy shore" on the southwest coast. If the shore is the Campo dell'Oro (Place of Gold) the Lochra would seem to be the combined mouth of the Gravona and Prunelli Rivers, neither one of which sounds like Lochra. North of there was a Roman city, Ourchinion. The western coastline was so distorted, however, that it is impossible to say where Adjacium was; certainly, he would have known its name and location if he had had any first-hand knowledge of the island and if in fact it was there. Ptolemy's Ourchinion is further north than Ajaccio and does not have the same name. It could be Sagone. The lack of correspondence between Ptolemaic and historical names known to be ancient has no defense except in the case of the two Roman colonies, Aleria and Mariana. In any case the population of the region must belong to Ptolemy's Tarabeni or Titiani people, neither of which are ever heard about again. Archaeological evidence The population of the city throughout the centuries maintained an oral tradition that it had originally been Roman. Travellers of the 19th century could point to the Hill of San Giovanni on the northwest shore of the Gulf of Ajaccio, which still had a cathedral said to have been the 6th-century seat of the Bishop of Ajaccio. The Castello Vecchio ("old castle"), a ruined citadel, was believed to be Roman but turned out to have Gothic features. The hill was planted with vines. The farmers kept turning up artifacts and terracotta funerary urns that seemed to be Roman. In the 20th century, the hill was covered over with buildings and became a part of downtown Ajaccio. In 2005 construction plans for a lot on the hill offered the opportunity to the Institut national de recherches archéologiques preventatives (Inrap) to excavate. They found the baptistry of a 6th-century cathedral and large amounts of pottery dated to the 6th and 7th centuries AD; in other words, an early Christian town. A cemetery had been placed over the old church. In it was a single Roman grave covered over with roof tiles bearing short indecipherable inscriptions. The finds of the previous century had included Roman coins. This is the only evidence so far of a Roman city continuous with the early Christian one. Medieval Genoese period It has been established that after the 8th century the city, like most other Corsican coastal communities, strongly declined and disappeared almost completely. Nevertheless, a castle and a cathedral were still in place in 1492 which last was not demolished until 1748. Towards the end of the 15th century, the Genoese were eager to assert their dominance in the south of the island and decided to rebuild the city of Ajaccio. Several sites were considered: the Pointe de la Parata (not chosen because it was too exposed to the wind), the ancient city (finally considered unsafe because of the proximity of the salt ponds), and finally the Punta della Lechia which was finally selected. Work began on the town on 21 April 1492 south of the Christian village by the Bank of Saint George at Genoa, who sent Cristoforo of Gandini, an architect, to build it. He began with a castle on Capo di Bolo, around which he constructed residences for several hundred people. The new city was essentially a colony of Genoa. The Corsicans were restricted from the city for some years. Nevertheless, the town grew rapidly and became the administrative capital of the province of Au Delà Des Monts (more or less the current Corse-du-Sud). Bastia remained the capital of the entire island. Although at first populated exclusively by the Genoese, the city slowly opened to the Corsicans while the Ajaccians, almost to the French conquest, were legally citizens of the Republic of Genoa and were happy to distinguish themselves from the insular paesani who lived mainly in Borgu, a suburb outside the city walls (the current rue Fesch was the main street). Attachment to France Ajaccio was occupied from 1553 to 1559 by the French, but it again fell to the Genoese after the Treaty of Cateau Cambresis in the latter year. Subsequently, the Republic of Genoa was strong enough to keep Corsica until 1755, the year Pasquale Paoli proclaimed the Corsican Republic. Paoli took most of the island for the republic, but he was unable to force Genoese troops out of the citadels of Saint-Florent, Calvi, Ajaccio, Bastia and Algajola. Leaving them there, he went on to build the nation, while the Republic of Genoa was left to ponder prospects and solutions. Their ultimate solution was to sell Corsica to France in 1768 and French troops of the Ancien Régime replaced Genoese ones in the citadels, including Ajaccio's. Corsica was formally annexed to France in 1780. Napoleon Napoleon Bonaparte (born as Napoleone di Buonaparte) was born at Ajaccio in the same year as the Battle of Ponte Novu, 1769. The Buonaparte family at the time had a modest four-story home in town (now a museum known as Maison Bonaparte) and a rarely used country home in the hills north of the city (now site of the Arboretum des Milelli). The father of the family, attorney Carlo di Buonaparte, was secretary to Pasquale Paoli during the Corsican Republic. After the defeat of Paoli, the Comte de Marbeuf began to meet with some leading Corsicans to outline the shape of the future and enlist their assistance. The Comte was among a delegation from Ajaccio in 1769, offered his loyalty and was appointed assessor. Marbeuf also offered Carlo di Buonaparte an appointment for one of his sons to the Military College of Brienne, but Napoleone did not speak French which was a requirement and he had to be at least ten years of age. There is a dispute concerning Napoleon's age because of this requirement; the emperor is known to have altered the civic records at Ajaccio concerning himself and it is possible that he was born in Corte in 1768 when his father was there on business. In any case Napoleon was sent to a school in Autun to learn basic French, then after a year went to Brienne from 1779 to 1784. At Brienne Napoleon concentrated on studies. He wrote a boyish history of Corsica. He did not share his father's views but held Pasquale Paoli in high esteem and was at heart a Corsican nationalist. The top students were encouraged to go into the artillery. After graduation and a brief sojourn at the Military School of Paris Napoleon applied for a second-lieutenancy in the artillery regiment of La Fère at Valence and after a time was given the position. Meanwhile, his father died and his mother was cast into poverty in Corsica, still having four children to support. Her only income was Napoleon's meager salary. The regiment was in Auxonne when the revolution broke out in the summer of 1789. Napoleon returned on leave to Ajaccio in October, became a Jacobin and began to work for the revolution. The National Assembly in Paris united Corsica to France and pardoned its exiles. Paoli returned in 1790 after 21 years and kissed the soil on which he stood. He and Napoleon met and toured the battlefield of Paoli's defeat. A national assembly at Orezza created the department of Corsica and Paoli was subsequently elected president. He commanded the national guard raised by Napoleon. After a brief return to his regiment Napoleon was promoted to first lieutenant and came home again on leave in 1791. All officers were recalled from leave in 1792, intervention threatened and war with Austria (Marie-Antoinette's homeland) began. Napoleon returned to Paris for review, was exonerated, then promoted to captain and given leave to escort his sister, a schoolgirl, back to Corsica at state expense. His family was prospering; his estate increased. Napoleon became a lieutenant-colonel in the Corsican National Guard. Paoli sent him off on an expedition to Sardinia, ordered by France, under Paolis's nephew but the nephew had secret orders from Paoli to make sure the expedition failed. Paoli was now a conservative, opposing the execution of the king and supporting an alliance with Great Britain. Returning from Sardinia Napoleon with his family and all his supporters were instrumental in getting Paoli denounced at the National Convention in Paris in 1793. Napoleon earned the hatred of the Paolists by pretending to support Paoli and then turning against him (payment, one supposes, for Sardinia). Paoli was convicted in absentia, a warrant was issued for his arrest (which could not be served) and Napoleon was dispatched to Corsica as Inspector General of Artillery to take the citadel of Ajaccio from the royalists who had held it since 1789. The Paolists combining with the royalists defeated the French in two pitched battles and Napoleon and his family went on the run, hiding by day, while the Paolists burned their estate. Napoleon and his mother, Laetitia, were taken out by ship in June 1793, by friends while two of the girls found refuge with other friends. They landed in Toulon with only Napoleon's pay for their support. The Bonapartes moved to Marseille but in August Toulon offered itself to the British and received the protection of a fleet under Admiral Hood. The Siege of Toulon began in September under revolutionary officers mainly untrained in the art of war. Napoleon happened to present socially one evening and during a casual conversation over a misplaced 24-pounder explained the value of artillery. Taken seriously he was allowed to bring up over 100 guns from coastal emplacements but his plan for the taking of Toulon was set aside as one incompetent officer superseded another. By December they decided to try his plan and made him a Colonel. Placing the guns at close range he used them to keep the British fleet away while he battered down the walls of Toulon. As soon as the Committee of Public Safety heard of the victory Napoleon became a brigadier general, the start of his meteoric rise to power. The Bonapartes were back in Ajaccio in 1797 under the protection of General Napoleon. Soon after Napoleon became First Consul and then emperor, using his office to spread revolution throughout Europe. In 1811 he made Ajaccio the capital of the new Department of Corsica. Despite his subsequent defeat by the Prussians, Russians, and British, his exile and his death, no victorious power reversed that decision or tried to remove Corsica from France. Among the natives, though Corsican nationalism is strong, and feeling often runs high in favour of a union with Italy; loyalty to France, however, as evidenced by elections, remains stronger. 19th and 20th centuries In the 19th century Ajaccio became a winter resort of the high society of the time, especially for the English, in the same way as Monaco, Cannes, and Nice. An Anglican Church was even built. The first prison in France for children was built in Ajaccio in 1855: the Horticultural colony of Saint Anthony. It was a correctional colony for juvenile delinquents (from 8 to 20 years old), established under Article 10 of the Act of 5 August 1850. Nearly 1,200 children from all over France stayed there until 1866, when it was closed. Sixty percent of them perished, the victims of poor sanitation and malaria which infested the unhealthy areas that they were responsible to clean. Contemporary history On 9 September 1943, the people of Ajaccio rose up against the Nazi occupiers and became the first French town to be liberated from the domination of the Germans. General Charles de Gaulle went to Ajaccio on 8 October 1943 and said: "We owe it to the field of battle the lesson of the page of history that was written in French Corsica. Corsica to her fortune and honour is the first morsel of France to be liberated; which was done intentionally and willingly, in the light of its liberation, this demonstrates that these are the intentions and the will of the whole nation." Throughout this period, no Jew was executed or deported from Corsica through the protection afforded by its people and its government. This event now allows Corsica to aspire to the title "Righteous Among the Nations", as no French region except for the commune Le Chambon-sur-Lignon in Haute-Loire carries this title. Their case is being investigated . Since the middle of the 20th century, Ajaccio has seen significant development. The city has seen population growth and considerable urban sprawl. Today Ajaccio is the capital of Corsica and the main town of the island and seeks to establish itself as a true regional centre. Ajaccio was a hotspot for violence during the violent unrest in March 2022. Economy The city is, with Bastia, the economic, commercial and administrative centre of Corsica. Its urban area of nearly 90,000 inhabitants is spread over a large part of the Corse-du-Sud, on either side of the Gulf of Ajaccio and up the valley of the Gravona. Its business is primarily oriented towards the services sector. The services sector is by far the main source of employment in the city. Ajaccio is an administrative centre comprising communal, intercommunal, departmental, regional, and prefectural services. It is also a shopping centre with the commercial streets of the city centre and the areas of peripheral activities such as that of Mezzavia (hypermarket Géant Casino) and along the ring road (hypermarket Carrefour and E. Leclerc). Tourism is one of the most vital aspects of the economy, split between the seaside tourism of summer, cultural tourism, and fishing. A number of hotels, varying from one star to five star, are present across the commune. Ajaccio is the seat of the Chamber of Commerce and Industry of Ajaccio and Corsica South. It manages the ports of Ajaccio, Bonifacio, Porto-Vecchio, Propriano and the Tino Rossi marina. It also manages Ajaccio airport and Figari airport as well as the convention centre and the Centre of Ricanto. Secondary industry is underdeveloped, apart from the aeronautical company Corsica Aerospace Composites CCA, the largest company on the island with 135 employees at two sites. The storage sites of GDF Suez (formerly Gaz de France) and Antargaz in the district of Vazzio are classified as high risk. Energy The Centrale EDF du Vazzio, a heavy oil power station, provides the south of the island with electricity. The Gravona Canal delivers water for consumption by the city. Transport Road access By road, the city is accessible from National Route NR194 from Bastia and NR193 via NR196 from Bonifacio. These two main axes, as well as the roads leading to suburban villages, connect Ajaccio from the north - the site of Ajaccio forming a dead end blocked by the sea to the south. Only the Cours Napoleon and the Boulevard du Roi Jerome cross the city. Along with the high urban density, this explains the major traffic and parking problems especially during peak hours and during the summer tourist season. A bypass through several neighbourhoods is nearing completion. Communal bus services The Muvistrada provide services on 21 urban routes, one "city" route for local links and 20 suburban lines. The frequency varies according to demand with intervals of 30 minutes for the most important routes: A park and ride with 300 spaces was built at Mezzana in the neighbouring commune of Sarrola-Carcopino in order to promote intermodality between cars and public transport. It was inaugurated on 12 July 2010. Airport The city is served by an Ajaccio Napoleon Bonaparte Airport which is the headquarters of Air Corsica, a Corsican airline. It connects Ajaccio to a number of cities in mainland France (including Paris, Marseille, Nice, and Brive) and to places in Europe to serve the tourist industry. The airline CCM Airlines also has its head office on the grounds of the Airport. Port The port of Ajaccio is connected to the French mainland on an almost daily basis (Marseille, Toulon, Nice). There are also occasional links to the Italian mainland (Livorno) and to Sardinia, as well as a seasonal service serving Calvi and Propriano. The two major shipping companies providing these links are Corsica Linea and Corsica Ferries. Ajaccio has also become a stopover for cruises with a total of 418,086 passengers in 2007by far the largest in Corsica and the second-largest in France (after Marseille, but ahead of Nice/Villefranche-sur-Mer and Cannes). The goal is for Ajaccio to eventually become the premier French port for cruises as well as being a main departure point. The Port function of the city is also served by the commercial, pleasure craft, and artisanal fisheries (3 ports). Railways The railway station in Ajaccio belongs to Chemins de fer de la Corse and is located near the port at the Square Pierre Griffi. It connects Ajaccio to Corte, Bastia (3 h 25 min) and Calvi. There are two optional stops: Salines Halt north of the city in the district of the same name Campo dell'Oro Halt near the airport In addition, the municipality has introduced an additional commuter service between Mezzana station in the suburbs and Ajaccio station located in the centre. Administration Ajaccio was successively: Capital of the district of the department of Corsica in 1790 to 1793 Capital of the department of Liamone from 1793 to 1811 Capital of the department of Corsica from 1811 to 1975 Capital of the region and the collectivité territoriale de Corse since 1970 and the department of Corse-du-Sud since 1976 Policy Ajaccio remained (with some interruptions) an electoral stronghold of the Bonapartist (CCB) party until the municipal elections of 2001. The outgoing municipality was then beaten by a left-wing coalition led by Simon Renucci which gathered Social Democrats, Communists, and Charles Napoleon - the pretender to the imperial throne. List of Successive Mayors of Ajaccio Quarters 10 Quarters are recognized by the municipality. Cannes-Binda: an area north of the city, consisting of Housing estates, classed as a Sensitive urban zone (ZUS) with Les Salines, subject to a policy of urban renewal Centre Ville: The tourist heart of the city consisting of shopping streets and major thoroughfares Casone: a bourgeois neighbourhood with an affluent population located in the former winter resort on the heights of the southern city. Les Jardins de l'Empereur: a neighbourhood classified as a Sensitive urban zone (ZUS) on the heights of the city, consisting of housing estates overlooking the city Mezzavia: northern quarter of the town with several subdivisions and areas of business and economic activities Octroi-Sainte Lucie: constitutes the northern part of the city centre near the port and the railway station Pietralba: quarter northeast of the city, classified ZUS Résidence des Îles: quarter to the south of the city near the tourist route of Sanguinaires in a quality environment Saint-Jean: collection of buildings for a population with low incomes, close to the historic urban core of the city, classified as a Sensitive urban zone (ZUS) Saline: quarter north of the city, consisting of large apartment blocks, classed as a Sensitive urban zone (ZUS) with Les Cannes, subject to a policy of urban renewal Vazzio: quarter northeast of the city, near the airport, the EDF Central, and the Francois Coty stadium. Intercommunality Since December 2001, Ajaccio has been part of the Communauté d'agglomération du Pays Ajaccien with nine other communes: Afa, Alata, Appietto, Cuttoli-Corticchiato, Peri, Sarrola-Carcopino, Tavaco, Valle-di-Mezzana, and Villanova. Origins The geopolitical arrangements of the commune are slightly different from those typical of Corsica and France. Usually an arrondissement includes cantons and a canton includes one to several communes including the chef-lieu, "chief place", from which the canton takes its name. The city of Ajaccio is one commune, but it contains four cantons, Cantons 1–4, and a fraction of Canton 5. The latter contains three other communes: Bastelicaccia, Alata and Villanova, making a total of four communes for the five cantons of Ajaccio. Each canton contains a certain number of quartiers, "quarters". Cantons 1, 2, 3, 4 are located along the Gulf of Ajaccio from west to east, while 5 is a little further up the valleys of the Gravona and the Prunelli Rivers. These political divisions subdivide the population of Ajaccio into units that can be more democratically served but they do not give a true picture of the size of Ajaccio. In general language, "greater Ajaccio" includes about 100,000 people with all the medical, educational, utility and transportational facilities of a big city. Up until World War II it was still possible to regard the city as being a settlement of narrow streets localized to a part of the harbour or the Gulf of Ajaccio: such bucolic descriptions do not fit the city of today, and travelogues intended for mountain or coastal recreational areas do not generally apply to Corsica's few big cities. The arrondissement contains other cantons that extend generally up the two rivers into central Corsica. Twin towns – sister cities Ajaccio is twinned with: La Maddalena, Italy (1991) Population The population of Ajaccio increased sharply after 1960 due to migration from rural areas and the coming of "Pied-Noirs" (French Algerians), immigrants from the Maghreb and French from mainland France. Health Ajaccio has three hospital sites: the Misericordia Hospital, built in 1950, is located on the heights of the city centre. This is the main medical facility in the region. The Annex Eugenie. the Psychiatric Hospital of Castelluccio is west of the city centre and is also home of cancer services and long-stay patients. Education Ajaccio is the headquarters of the Academy of Corsica. The city of Ajaccio has: 18 nursery schools (16 public and 2 private) 17 primary schools (15 public and 2 private) 6 colleges 5 Public Schools: Collège Arthur-Giovoni Collège des Padule Collège Laetitia Bonaparte Collège Fesch EREA 1 Private School: Institution Saint Paul 3 sixth-form colleges/senior high schools 2 public schools: Lycée Laetitia Bonaparte Lycée Fesch 1 private: Institution Saint Paul 2 LEP (vocational high schools) Lycée Finosello Lycée Jules Antonini Higher education is undeveloped except for a few BTS and IFSI, the University of Corsica Pascal Paoli is located in Corte. A research facility of INRA is also located on Ajaccio. Culture and heritage Ajaccio has a varied tourism potential, with both a cultural framework in the centre of the city and a natural heritage around the coves and beaches of the Mediterranean Sea, as well as the Natura 2000 reserve of the îles Sanguinaires. Civil heritage The commune has many buildings and structures that are registered as historical monuments: The Monument to General Abbatucci in the Place Abbatucci (1854) The Monument to Napoleon I in the Place d'Austerlitz (20th century) The Baciocchi Family Mansion at 9 Rue Bonaparte (18th century) The Fesch Palace at 48 bis Rue Cardinal-Fesch (1827) The Monument to the First Consul in the Place Foch (1850) The Peraldi House at 18 Rue Forcioli-Conti (1820) The Grand Hotel at Cours Grandval (1869) The old Château Conti at Cours Grandval (19th century) The Monument to Napoleon and his brothers in the Place du General de Gaulle (1864) The Monument to Cardinal Fesch at the Cour du Musée Fesch (1856) The old Alban Factory at 89 Cours Napoleon (1913) The Milelli House in the Saint-Antoine Quarter (17th century) The Hotel Palace-Cyrnos (1880), an old Luxury Hotel from the 19th century and a famous palace of the old days in the quarter "for foreigners" now converted into housing. The Lantivy Palace (1837), an Italian palace now headquarters of the prefecture of Corsica. The Hotel de Ville (1836) Napoleon Bonaparte's House (17th century) now a national museum: the Maison Bonaparte The old Lazaretto of Aspretto (1843) The Citadel (1554) The Sawmill at Les Salines (1944) The Lighthouse on the Sanguinaires Islands (1844) Other sites of interest The Monument in the Place du Casone The old town and the Borgu are typically Mediterranean with their narrow streets and picturesque buildings The Place Bonaparte, a quarter frequented chiefly by winter visitors attracted by the mild climate of the town The Musée Fesch houses a large collection of Italian Renaissance paintings The Bandera Museum, a History Museum of Mediterranean Corsica The Municipal library, in the north wing of Musée Fesch, has early printed books from as early as the 14th century The area known as the Foreigners' Quarter has a number of old palaces, villas, and buildings once built for the wintering British in the Belle Époque such as the Anglican Church and the Grand Hotel Continental. Some of the buildings are in bad condition and very degraded, others were destroyed for the construction of modern buildings. The Genoese towers: Torra di Capu di Fenu, Torra di a Parata, and Torra di Castelluchju in the Îles Sanguinaires archipelago The Square Pierre Griffi (in front of the railway station), named after a hero of the Corsican Resistance and one of the members of the , the first operation launched in occupied Corsica to coordinate resistance The Statue of Commandant Jean L'Herminier (in front of the ferry terminal), commander of the French submarine Casabianca (1935) which actively participated in the struggle for the liberation of Corsica in September 1943 Religious heritage The town is the seat of a bishopric dating at least from the 7th century. It has tribunals of first instance and of commerce, training colleges, a communal college, a museum and a library; the three latter are established in the Palais Fesch, founded by Cardinal Fesch, who was born at Ajaccio in 1763. The commune has several religious buildings and structures that are registered as historical monuments: The former Episcopal Palace at 24 Rue Bonaparte (1622) The Oratory of Saint Roch at Rue Cardinal-Fesch (1599) The Chapel of Saint Erasme or Sant'Erasmu at 22 Rue Forcioli-Conti (17th century) The Oratory of Saint John the Baptist at Rue du Roi-de-Dome (1565) The Cathedral of Santa Maria Assunta at Rue Saint-Charles (1582) from the Renaissance which depended on the diocese of Ajaccio and where Napoleon was baptized with its organ from Cavaillé-Coll. The Chapel of the Greeks on the Route des Sanguiunaires (1619) The Early Christian Baptistery of Saint John (6th century) The Imperial Chapel (1857) houses the graves of Napoleon's parents and his brothers and sisters. Other religious sites of interest The Church of Saint Roch, Neoclassical architecture by Ajaccien project architect Barthélémy Maglioli (1885) Environmental heritage Sanguinaires Archipelago: The Route des Sanguinaires runs along the southern coast of the city after the Saint François Beach. It is lined with villas and coves and beaches. Along the road is the Ajaccio cemetery with the grave of Corsican singer Tino Rossi. At the mouth of the Route des Sanguinaires is the Pointe de la Parata near the archipelago and the lighthouse. The Sentier des Crêtes (Crest Trail) starts from the city centre and is an easy hike offering splendid views of the Gulf of Ajaccio. The shores of the Gulf are dotted with a multitude of small coves and beaches ideal for swimming and scuba diving. Many small paths traversing the maquis (high ground covered in thick vegetation) in the commune from which the Maquis resistance network was named. Interests The city has two marinas and a casino. The main activities are concentrated in the city centre on the Route des Sanguinaires (cinemas, bars, clubs etc.). In popular culture Films made in Ajaccio include: Napoléon, one of the last successful French silent films by Abel Gance in 1927. Les Radonneurs, a French film directed by Philippe Harel in 1997. Les Sanguinaires, a film by Laurent Cantet in 1998. The Amazing Race, an American TV series by Elise Doganieri and Bertram van Munster in 2001 (season 6 episode 9). L'Enquête Corse, directed by Alain Berberian in 2004. Trois petites filles, a French film directed by Jean-Loup Hubert in 2004. Joueuse (Queen to Play), a French film directed by Caroline Bottaro in 2009. Sports There are various sports facilities developed throughout the city. AC Ajaccio is a French Ligue 2 football club which plays at the Stade François Coty (13,500 seats) in the north-east of the city Gazélec Football Club Ajaccio, in Championnat National, football club which plays at the Stade Ange Casanova located at Mezzavia, 2,900 seats. GFCO Ajaccio handball GFCO Ajaccio Volleyball GFCO Ajaccio Basketball Vignetta Racecourse Notable people Carlo Buonaparte (1746–1785), politician, father of Napoleon Bonaparte Felice Pasquale Baciocchi (1762–1841), general of the armies of the Revolution and the Empire, brother in law of the Emperor Napoleon 1st, Grand Duke of Tuscany Joseph Fesch (1763–1839), cardinal Joseph Bonaparte (1768-1844), French statesman, King of Naples, King of Spain Napoleon Bonaparte (1769–1821), Emperor of France Lucien Bonaparte (1775–1840), Prince of Canino and Musignano, Interior Minister of France Elisa Bonaparte (1777–1820), Grand Duchess of Tuscany Louis Bonaparte (1778–1846), King of Holland Pauline Bonaparte (1780-1825), Duchess of Guastalla, Princess Consort of Sulmona and Rossano Caroline Bonaparte (1782–1839), Queen Consort of Naples and Sicily Jérôme Bonaparte (1784–1860), King of Westphalia François Coty (1874–1934), perfumer, businessman, newspaper publisher and politician Irène Bordoni (1895–1953), singer and actress Tino Rossi (1907–1983), singer and actor Michel Giacometti (1929–1990), ethnomusicologist François Duprat (1941–1978), writer Michel Ferracci-Porri (born 1949), writer Jean-Michel Cavalli (born 1959), football player and manager Alizée (born 1984), singer Military Units that were stationed in Ajaccio: 163rd Infantry Regiment, 1906 173rd Infantry Regiment The Aspretto naval airbase for seaplanes 1938–1993 Gallery See also Diocese of Ajaccio Communes of the Corse-du-Sud department References External links Official website The Communauté d'Agglomération du Pays Ajaccien (CAPA) website Tourism Office of Ajaccio website Tourist Info Visit Ajaccio Communes of Corse-du-Sud Prefectures in France
2642
https://en.wikipedia.org/wiki/Ajanta%20Caves
Ajanta Caves
The Ajanta Caves are 29 rock-cut Buddhist cave monuments dating from the second century BCE to about 480 CE in the Aurangabad District of Maharashtra state in India. Ajanta Caves are a UNESCO World Heritage Site. Universally regarded as masterpieces of Buddhist religious art, the caves include paintings and rock-cut sculptures described as among the finest surviving examples of ancient Indian art, particularly expressive paintings that present emotions through gesture, pose and form. The caves were built in two phases, the first starting around the second century BCE and the second occurring from 400 to 650 CE, according to older accounts, or in a brief period of 460–480 CE according to later scholarship. The Ajanta Caves constitute ancient monasteries (Viharas) and worship-halls (Chaityas) of different Buddhist traditions carved into a wall of rock. The caves also present paintings depicting the past lives and rebirths of the Buddha, pictorial tales from Aryasura's Jatakamala, and rock-cut sculptures of Buddhist deities. Textual records suggest that these caves served as a monsoon retreat for monks, as well as a resting site for merchants and pilgrims in ancient India. While vivid colours and mural wall paintings were abundant in Indian history as evidenced by historical records, Caves 1, 2, 16 and 17 of Ajanta form the largest corpus of surviving ancient Indian wall-paintings. The Ajanta Caves are mentioned in the memoirs of several medieval-era Chinese Buddhist travellers. They were covered by jungle until accidentally "discovered" and brought to Western attention in 1819 by a colonial British officer Captain John Smith on a tiger-hunting party. The caves are in the rocky northern wall of the U-shaped gorge of the river Waghur, in the Deccan plateau. Within the gorge are a number of waterfalls, audible from outside the caves when the river is high. With the Ellora Caves, Ajanta is one of the major tourist attractions of Maharashtra. It is about from the city of Jalgaon, Maharashtra, India, from the city of Aurangabad, and east-northeast of Mumbai. Ajanta is from the Ellora Caves, which contain Hindu, Jain and Buddhist caves, the last dating from a period similar to Ajanta. The Ajanta style is also found in the Ellora Caves and other sites such as the Elephanta Caves, Aurangabad Caves, Shivleni Caves and the cave temples of Karnataka. History The Ajanta Caves are generally agreed to have been made in two distinct phases; first during the 2nd century BCE to 1st century CE, and second several centuries later. The caves consist of 36 identifiable foundations, some of them discovered after the original numbering of the caves from 1 through 29. The later-identified caves have been suffixed with the letters of the alphabet, such as 15A, identified between originally numbered caves 15 and 16. The cave numbering is a convention of convenience, and does not reflect the chronological order of their construction. Caves of the first (Satavahana) period The earliest group consists of caves 9, 10, 12, 13 and 15A. The murals in these caves depict stories from the Jatakas. Later caves reflect the artistic influence of the Gupta period, but there are differing opinions on which century in which the early caves were built. According to Walter Spink, they were made during the period 100 BCE to 100 CE, probably under the patronage of the Hindu Satavahana dynasty (230 BCE – c. 220 CE) who ruled the region. Other datings prefer the period of the Maurya Empire (300 BCE to 100 BCE). Of these, caves 9 and 10 are stupa containing worship halls of chaitya-griha form, and caves 12, 13, and 15A are vihāras (see the architecture section below for descriptions of these types). The first Satavahana period caves lacked figurative sculpture, emphasizing the stupa instead. According to Spink, once the Satavahana period caves were made, the site was not further developed for a considerable period until the mid-5th century. However, the early caves were in use during this dormant period, and Buddhist pilgrims visited the site, according to the records left by Chinese pilgrim Faxian around 400 CE. Caves of the later or Vākāṭaka period The second phase of construction at the Ajanta Caves site began in the 5th century. For a long time it was thought that the later caves were made over an extended period from the 4th to the 7th centuries CE, but in recent decades a series of studies by the leading expert on the caves, Walter M. Spink, have argued that most of the work took place over the very brief period from 460 to 480 CE, during the reign of Hindu Emperor Harishena of the Vākāṭaka dynasty. This view has been criticised by some scholars, but is now broadly accepted by most authors of general books on Indian art, for example, Huntington and Harle. The second phase is attributed to the theistic Mahāyāna, or Greater Vehicle tradition of Buddhism. Caves of the second period are 1–8, 11, 14–29, some possibly extensions of earlier caves. Caves 19, 26, and 29 are chaitya-grihas, the rest viharas. The most elaborate caves were produced in this period, which included some refurbishing and repainting of the early caves. Spink states that it is possible to establish dating for this period with a very high level of precision; a fuller account of his chronology is given below. Although debate continues, Spink's ideas are increasingly widely accepted, at least in their broad conclusions. The Archaeological Survey of India website still presents the traditional dating: "The second phase of paintings started around 5th–6th centuries A.D. and continued for the next two centuries". According to Spink, the construction activity at the incomplete Ajanta Caves was abandoned by wealthy patrons in about 480 CE, a few years after the death of Harishena. However, states Spink, the caves appear to have been in use for a period of time as evidenced by the wear of the pivot holes in caves constructed close to 480 CE. The second phase of constructions and decorations at Ajanta corresponds to the very apogee of Classical India, or India's golden age. However, at that time, the Gupta Empire was already weakening from internal political issues and from the assaults of the Hūṇas, so that the Vakatakas were actually one of the most powerful empires in India. Some of the Hūṇas, the Alchon Huns of Toramana, were precisely ruling the neighbouring area of Malwa, at the doorstep of the Western Deccan, at the time the Ajanta caves were made. Through their control of vast areas of northwestern India, the Huns may actually have acted as a cultural bridge between the area of Gandhara and the Western Deccan, at the time when the Ajanta or Pitalkhora caves were being decorated with some designs of Gandharan inspiration, such as Buddhas dressed in robes with abundant folds. According to Richard Cohen, a description of the caves by 7th-century Chinese traveler Xuanzang and scattered medieval graffiti suggest that the Ajanta Caves were known and probably in use subsequently, but without a stable or steady Buddhist community presence. The Ajanta caves are mentioned in the 17th-century text Ain-i-Akbari by Abu al-Fazl, as twenty four rock-cut cave temples each with remarkable idols. Colonial era On 28 April 1819 a British officer named John Smith, of the 28th Cavalry, while hunting tigers was shown the entrance to Cave No. 10 when a local shepherd boy guided him to the location and the door. The caves were well known by locals already. Captain Smith went to a nearby village and asked the villagers to come to the site with axes, spears, torches, and drums, to cut down the tangled jungle growth that made entering the cave difficult. He then deliberately damaged an image on the wall by scratching his name and the date over the painting of a bodhisattva. Since he stood on a five-foot high pile of rubble collected over the years, the inscription is well above the eye-level gaze of an adult today. A paper on the caves by William Erskine was read to the Bombay Literary Society in 1822. Within a few decades, the caves became famous for their exotic setting, impressive architecture, and above all their exceptional and unique paintings. A number of large projects to copy the paintings were made in the century after rediscovery. In 1848, the Royal Asiatic Society established the "Bombay Cave Temple Commission" to clear, tidy and record the most important rock-cut sites in the Bombay Presidency, with John Wilson as president. In 1861 this became the nucleus of the new Archaeological Survey of India. During the colonial era, the Ajanta site was in the territory of the princely state of the Hyderabad and not British India. In the early 1920s, Mir Osman Ali Khan the last Nizam of Hyderabad appointed people to restore the artwork, converted the site into a museum and built a road to bring tourists to the site for a fee. These efforts resulted in early mismanagement, states Richard Cohen, and hastened the deterioration of the site. Post-independence, the state government of Maharashtra built arrival, transport, facilities, and better site management. The modern Visitor Center has good parking facilities and public conveniences and ASI operated buses run at regular intervals from Visitor Center to the caves. The Nizam's Director of Archaeology obtained the services of two experts from Italy, Professor Lorenzo Cecconi, assisted by Count Orsini, to restore the paintings in the caves. The Director of Archaeology for the last Nizam of Hyderabad said of the work of Cecconi and Orsini: Despite these efforts, later neglect led to the paintings degrading in quality once again. Since 1983, Ajanta caves have been listed among the UNESCO World Heritage Sites of India. The Ajanta Caves, along with the Ellora Caves, have become the most popular tourist destination in Maharashtra, and are often crowded at holiday times, increasing the threat to the caves, especially the paintings. In 2012, the Maharashtra Tourism Development Corporation announced plans to add to the ASI visitor centre at the entrance complete replicas of caves 1, 2, 16 & 17 to reduce crowding in the originals, and enable visitors to receive a better visual idea of the paintings, which are dimly-lit and hard to read in the caves. Sites and monasteries Sites The caves are carved out of flood basalt and granite rock of a cliff, part of the Deccan Traps formed by successive volcanic eruptions at the end of the Cretaceous geological period. The rock is layered horizontally, and somewhat variable in quality. This variation within the rock layers required the artists to amend their carving methods and plans in places. The inhomogeneity in the rock has also led to cracks and collapses in the centuries that followed, as with the lost portico to cave 1. Excavation began by cutting a narrow tunnel at roof level, which was expanded downwards and outwards; as evidenced by some of the incomplete caves such as the partially-built vihara caves 21 through 24 and the abandoned incomplete cave 28. The sculpture artists likely worked at both excavating the rocks and making the intricate carvings of pillars, roof, and idols; further, the sculpture and painting work inside a cave were integrated parallel tasks. A grand gateway to the site was carved, at the apex of the gorge's horseshoe between caves 15 and 16, as approached from the river, and it is decorated with elephants on either side and a nāga, or protective Naga (snake) deity. Similar methods and application of artist talent is observed in other cave temples of India, such as those from Hinduism and Jainism. These include the Ellora Caves, Ghototkacha Caves, Elephanta Caves, Bagh Caves, Badami Caves, Aurangabad Caves and Shivleni Caves. The caves from the first period seem to have been paid for by a number of different patrons to gain merit, with several inscriptions recording the donation of particular portions of a single cave. The later caves were each commissioned as a complete unit by a single patron from the local rulers or their court elites, again for merit in Buddhist afterlife beliefs as evidenced by inscriptions such as those in Cave 17. After the death of Harisena, smaller donors motivated by getting merit added small "shrinelets" between the caves or add statues to existing caves, and some two hundred of these "intrusive" additions were made in sculpture, with a further number of intrusive paintings, up to three hundred in cave 10 alone. Monasteries The majority of the caves are vihara halls with symmetrical square plans. To each vihara hall are attached smaller square dormitory cells cut into the walls. A vast majority of the caves were carved in the second period, wherein a shrine or sanctuary is appended at the rear of the cave, centred on a large statue of the Buddha, along with exuberantly detailed reliefs and deities near him as well as on the pillars and walls, all carved out of the natural rock. This change reflects the shift from Hinayana to Mahāyāna Buddhism. These caves are often called monasteries. The central square space of the interior of the viharas is defined by square columns forming a more-or-less square open area. Outside this are long rectangular aisles on each side, forming a kind of cloister. Along the side and rear walls are a number of small cells entered by a narrow doorway; these are roughly square, and have small niches on their back walls. Originally they had wooden doors. The centre of the rear wall has a larger shrine-room behind, containing a large Buddha statue. The viharas of the earlier period are much simpler, and lack shrines. Spink places the change to a design with a shrine to the middle of the second period, with many caves being adapted to add a shrine in mid-excavation, or after the original phase. The plan of Cave 1 shows one of the largest viharas, but is fairly typical of the later group. Many others, such as Cave 16, lack the vestibule to the shrine, which leads straight off the main hall. Cave 6 is two viharas, one above the other, connected by internal stairs, with sanctuaries on both levels. Worship halls The other type of main hall architecture is the narrower rectangular plan with high arched ceiling type chaitya-griha – literally, "the house of stupa". This hall is longitudinally divided into a nave and two narrower side aisles separated by a symmetrical row of pillars, with a stupa in the apse. The stupa is surrounded by pillars and concentric walking space for circumambulation. Some of the caves have elaborate carved entrances, some with large windows over the door to admit light. There is often a colonnaded porch or verandah, with another space inside the doors running the width of the cave. The oldest worship halls at Ajanta were built in the 2nd to 1st century BCE, the newest ones in the late 5th century CE, and the architecture of both resembles the architecture of a Christian church, but without the crossing or chapel chevette. The Ajanta Caves follow the Cathedral-style architecture found in still older rock-cut cave carvings of ancient India, such as the Lomas Rishi Cave of the Ajivikas near Gaya in Bihar dated to the 3rd century BCE. These chaitya-griha are called worship or prayer halls. The four completed chaitya halls are caves 9 and 10 from the early period, and caves 19 and 26 from the later period of construction. All follow the typical form found elsewhere, with high ceilings and a central "nave" leading to the stupa, which is near the back, but allows walking behind it, as walking around stupas was (and remains) a common element of Buddhist worship (pradakshina). The later two have high ribbed roofs carved into the rock, which reflect timber forms, and the earlier two are thought to have used actual timber ribs and are now smooth, the original wood presumed to have perished. The two later halls have a rather unusual arrangement (also found in Cave 10 at Ellora) where the stupa is fronted by a large relief sculpture of the Buddha, standing in Cave 19 and seated in Cave 26. Cave 29 is a late and very incomplete chaitya hall. The form of columns in the work of the first period is very plain and un-embellished, with both chaitya halls using simple octagonal columns, which were later painted with images of the Buddha, people and monks in robes. In the second period columns were far more varied and inventive, often changing profile over their height, and with elaborate carved capitals, often spreading wide. Many columns are carved over all their surface with floral motifs and Mahayana deities, some fluted and others carved with decoration all over, as in cave 1. Paintings Most of the Ajanta caves, and almost all the murals paintings date from nearly 600 years later, during a second phase of construction. The paintings in the Ajanta caves predominantly narrate the Jataka tales. These are Buddhist legends describing the previous births of the Buddha. These fables embed ancient morals and cultural lores that are also found in the fables and legends of Hindu and Jain texts. The Jataka tales are exemplified through the life example and sacrifices that the Buddha made in hundreds of his past incarnations, where he is depicted as having been reborn as an animal or human. Mural paintings survive from both the earlier and later groups of caves. Several fragments of murals preserved from the earlier caves (Caves 10 and 11) are effectively unique survivals of ancient painting in India from this period, and "show that by Sātavāhana times, if not earlier, the Indian painters had mastered an easy and fluent naturalistic style, dealing with large groups of people in a manner comparable to the reliefs of the Sāñcī toraņa crossbars". Some connections with the art of Gandhara can also be noted, and there is evidence of a shared artistic idiom. Four of the later caves have large and relatively well-preserved mural paintings which, states James Harle, "have come to represent Indian mural painting to the non-specialist", and represent "the great glories not only of Gupta but of all Indian art". They fall into two stylistic groups, with the most famous in Caves 16 and 17, and apparently later paintings in Caves 1 and 2. The latter group were thought to be a century or later than the others, but the revised chronology proposed by Spink would place them in the 5th century as well, perhaps contemporary with it in a more progressive style, or one reflecting a team from a different region. The Ajanta frescos are classical paintings and the work of confident artists, without cliches, rich and full. They are luxurious, sensuous and celebrate physical beauty, aspects that early Western observers felt were shockingly out of place in these caves presumed to be meant for religious worship and ascetic monastic life. The paintings are in "dry fresco", painted on top of a dry plaster surface rather than into wet plaster. All the paintings appear to be the work of painters supported by discriminating connoisseurship and sophisticated patrons from an urban atmosphere. We know from literary sources that painting was widely practised and appreciated in the Gupta period. Unlike much Indian mural painting, compositions are not laid out in horizontal bands like a frieze, but show large scenes spreading in all directions from a single figure or group at the centre. The ceilings are also painted with sophisticated and elaborate decorative motifs, many derived from sculpture. The paintings in cave 1, which according to Spink was commissioned by Harisena himself, concentrate on those Jataka tales which show previous lives of the Buddha as a king, rather than as deer or elephant or another Jataka animal. The scenes depict the Buddha as about to renounce the royal life. In general the later caves seem to have been painted on finished areas as excavating work continued elsewhere in the cave, as shown in caves 2 and 16 in particular. According to Spink's account of the chronology of the caves, the abandonment of work in 478 after a brief busy period accounts for the absence of painting in places including cave 4 and the shrine of cave 17, the later being plastered in preparation for paintings that were never done. Spink's chronology and cave history Walter Spink has over recent decades developed a very precise and circumstantial chronology for the second period of work on the site, which unlike earlier scholars, he places entirely in the 5th century. This is based on evidence such as the inscriptions and artistic style, dating of nearby cave temple sites, comparative chronology of the dynasties, combined with the many uncompleted elements of the caves. He believes the earlier group of caves, which like other scholars he dates only approximately, to the period "between 100 BCE – 100 CE", were at some later point completely abandoned and remained so "for over three centuries". This changed during the Hindu emperor Harishena of the Vakataka Dynasty, who reigned from 460 to his death in 477, who sponsored numerous new caves during his reign. Harisena's rule extended the Central Indian Vakataka Empire to include a stretch of the east coast of India; the Gupta Empire ruled northern India at the same period, and the Pallava dynasty much of the south. According to Spink, Harisena encouraged a group of associates, including his prime minister Varahadeva and Upendragupta, the sub-king in whose territory Ajanta was, to dig out new caves, which were individually commissioned, some containing inscriptions recording the donation. This activity began in many caves simultaneously about 462. This activity was mostly suspended in 468 because of threats from the neighbouring Asmaka kings. Thereafter work continued on only Caves 1, Harisena's own commission, and 17–20, commissioned by Upendragupta. In 472 the situation was such that work was suspended completely, in a period that Spink calls "the Hiatus", which lasted until about 475, by which time the Asmakas had replaced Upendragupta as the local rulers. Work was then resumed, but again disrupted by Harisena's death in 477, soon after which major excavation ceased, except at cave 26, which the Asmakas were sponsoring themselves. The Asmakas launched a revolt against Harisena's son, which brought about the end of the Vakataka Dynasty. In the years 478–480 CE major excavation by important patrons was replaced by a rash of "intrusions" – statues added to existing caves, and small shrines dotted about where there was space between them. These were commissioned by less powerful individuals, some monks, who had not previously been able to make additions to the large excavations of the rulers and courtiers. They were added to the facades, the return sides of the entrances, and to walls inside the caves. According to Spink, "After 480, not a single image was ever made again at the site". However, there exists a Rashtrakuta inscription outside of cave 26 dateable to end of seventh or early 8th century, suggesting the caves were not abandoned until then. Spink does not use "circa" in his dates, but says that "one should allow a margin of error of one year or perhaps even two in all cases". Hindu and Buddhist sponsorship The Ajanta Caves were built in a period when both the Buddha and the Hindu gods were simultaneously revered in Indian culture. According to Spink and other scholars, the royal Vakataka sponsors of the Ajanta Caves probably worshipped both Hindu and Buddhist gods. This is evidenced by inscriptions in which these rulers, who are otherwise known as Hindu devotees, made Buddhist dedications to the caves. According to Spink, A terracotta plaque of Mahishasuramardini, also known as Durga, was also found in a burnt-brick vihara monastery facing the caves on the right bank of the river Waghora that has been recently excavated. This suggest that the deity was possibly under worship by the artisans. According to Yuko Yokoschi and Walter Spink, the excavated artifacts of the 5th century near the site suggest that the Ajanta caves deployed a huge number of builders. Cave 1 Cave 1 was built on the eastern end of the horseshoe-shaped scarp and is now the first cave the visitor encounters. This cave, when first made, would have been in a less prominent position, right at the end of the row. According to Spink, it is one of the last caves to have been excavated, when the best sites had been taken, and was never fully inaugurated for worship by the dedication of the Buddha image in the central shrine. This is shown by the absence of sooty deposits from butter lamps on the base of the shrine image, and the lack of damage to the paintings that would have happened if the garland-hooks around the shrine had been in use for any period of time. Spink states that the Vākāṭaka Emperor Harishena was the benefactor of the work, and this is reflected in the emphasis on imagery of royalty in the cave, with those Jataka tales being selected that tell of those previous lives of the Buddha in which he was royal. The cliff has a steeper slope here than at other caves, so to achieve a tall grand facade it was necessary to cut far back into the slope, giving a large courtyard in front of the facade. There was originally a columned portico in front of the present facade, which can be seen "half-intact in the 1880s" in pictures of the site, but this fell down completely and the remains, despite containing fine carvings, were carelessly thrown down the slope into the river and lost. This cave (35.7 m × 27.6 m) has one of the most elaborate carved facades, with relief sculptures on entablature and ridges, and most surfaces embellished with decorative carving. There are scenes carved from the life of the Buddha as well as a number of decorative motifs. A two-pillared portico, visible in the 19th-century photographs, has since perished. The cave has a forecourt with cells fronted by pillared vestibules on either side. These have a high plinth level. The cave has a porch with simple cells at both ends. The absence of pillared vestibules on the ends suggests that the porch was not excavated in the latest phase of Ajanta when pillared vestibules had become customary. Most areas of the porch were once covered with murals, of which many fragments remain, especially on the ceiling. There are three doorways: a central doorway and two side doorways. Two square windows were carved between the doorways to brighten the interiors. Each wall of the hall inside is nearly long and high. Twelve pillars make a square colonnade inside, supporting the ceiling and creating spacious aisles along the walls. There is a shrine carved on the rear wall to house an impressive seated image of the Buddha, his hands being in the dharmachakrapravartana mudra. There are four cells on each of the left, rear, and the right walls, though due to rock fault there are none at the ends of the rear aisle. The paintings of Cave 1 cover the walls and the ceilings. They are in a fair state of preservation, although the full scheme was never completed. The scenes depicted are mostly didactic, devotional, and ornamental, with scenes from the chalukya corutstories about Persian ambassador in pulikeshin 2nd corut tells the relationship btw chalukya empire and Persian Empire Jataka stories of the Buddha's former lives as a bodhisattva, the life of the Gautama Buddha, and those of his veneration. The two most famous individual painted images at Ajanta are the two over-lifesize figures of the protective bodhisattvas Padmapani and Vajrapani on either side of the entrance to the Buddha shrine on the wall of the rear aisle (see illustrations above). Other significant frescoes in Cave 1 include the Sibi, Sankhapala, Mahajanaka, Mahaummagga, and Champeyya Jataka tales. The cave-paintings also show the Temptation of Mara, the miracle of Sravasti where the Buddha simultaneously manifests in many forms, the story of Nanda, and the story of Siddhartha and Yasodhara. Cave 2 Cave 2, adjacent to Cave 1, is known for the paintings that have been preserved on its walls, ceilings, and pillars. It looks similar to Cave 1 and is in a better state of preservation. This cave is best known for its feminine focus, intricate rock carvings and paint artwork yet it is incomplete and lacks consistency. One of the 5th-century frescos in this cave also shows children at a school, with those in the front rows paying attention to the teacher, while those in the back row are shown distracted and acting. Cave 2 (35.7 m × 21.6 m) was started in the 460s, but mostly carved between 475 and 477 CE, probably sponsored and influenced by a woman closely related to emperor Harisena. It has a porch quite different from Cave 1. Even the façade carvings seem to be different. The cave is supported by robust pillars, ornamented with designs. The front porch consists of cells supported by pillared vestibules on both ends. The hall has four colonnades which are supporting the ceiling and surrounding a square in the center of the hall. Each arm or colonnade of the square is parallel to the respective walls of the hall, making an aisle in between. The colonnades have rock-beams above and below them. The capitals are carved and painted with various decorative themes that include ornamental, human, animal, vegetative, and semi-divine motifs. Major carvings include that of goddess Hariti. She is a Buddhist deity who originally was the demoness of smallpox and a child eater, who the Buddha converted into a guardian goddess of fertility, easy child birth and one who protects babies. The paintings on the ceilings and walls of Cave 2 have been widely published. They depict the Hamsa, Vidhurapandita, Ruru, Kshanti Jataka tales and the Purna Avadhana. Other frescos show the miracle of Sravasti, Ashtabhaya Avalokitesvara and the dream of Maya. Just as the stories illustrated in cave 1 emphasise kingship, those in cave 2 show many noble and powerful women in prominent roles, leading to suggestions that the patron was an unknown woman. The porch's rear wall has a doorway in the center, which allows entrance to the hall. On either side of the door is a square-shaped window to brighten the interior. Cave 3 Cave 3 is merely a start of an excavation; according to Spink it was begun right at the end of the final period of work and soon abandoned. This is an incomplete monastery and only the preliminary excavations of pillared veranda exist. The cave was one of the last projects to start at the site. Its date could be ascribed to circa 477 CE, just before the sudden death of Emperor Harisena. The work stopped after the scooping out of a rough entrance of the hall. Cave 4 Cave 4, a Vihara, was sponsored by Mathura, likely not a noble or courtly official, rather a wealthy devotee. This is the largest vihara in the inaugural group, which suggests he had immense wealth and influence without being a state official. It is placed at a significantly higher level, possibly because the artists realized that the rock quality at the lower and same level of other caves was poor and they had a better chance of a major vihara at an upper location. Another likely possibility is that the planners wanted to carve into the rock another large cistern to the left courtside for more residents, mirroring the right, a plan implied by the height of the forward cells on the left side. The Archaeological Survey of India dates it to the 6th century CE. Spink, in contrast, dates this cave's inauguration a century earlier, to about 463 CE, based on construction style and other inscriptions. Cave 4 shows evidence of a dramatic collapse of its ceiling in the central hall, likely in the 6th century, something caused by the vastness of the cave and geological flaws in the rock. Later, the artists attempted to overcome this geological flaw by raising the height of the ceiling through deeper excavation of the embedded basalt lava. The cave has a squarish plan, houses a colossal image of the Buddha in preaching pose flanked by bodhisattvas and celestial nymphs hovering above. It consists, of a verandah, a hypostylar hall, sanctum with an antechamber and a series of unfinished cells. This monastery is the largest among the Ajanta caves and it measures nearly (35m × 28m). The door frame is exquisitely sculpted flanking to the right is carved Bodhisattva as reliever of Eight Great Perils. The rear wall of the verandah contains the panel of litany of Avalokiteśvara. The cave's ceiling collapse likely affected its overall plan, caused it being left incomplete. Only the Buddha's statue and the major sculptures were completed, and except for what the sponsor considered most important elements all other elements inside the cave were never painted. Cave 5 Cave 5, an unfinished excavation, was planned as a monastery (10.32 × 16.8 m). Cave 5 is devoid of sculpture and architectural elements except the door frame. The ornate carvings on the frame has female figures with mythical makara creatures found in ancient and medieval-era Indian arts. The cave's construction was likely initiated about 465 CE but abandoned because the rock has geological flaws. The construction was resumed in 475 CE after Asmakas restarted work at the Ajanta caves, but abandoned again as the artists and sponsor redesigned and focussed on an expanded Cave 6 that abuts Cave 5. Cave 6 Cave 6 is two-storey monastery (16.85 × 18.07 m). It consists of a sanctum, a hall on both levels. The lower level is pillared and has attached cells. The upper hall also has subsidiary cells. The sanctums on both level feature a Buddha in the teaching posture. Elsewhere, the Buddha is shown in different mudras. The lower level walls depict the Miracle of Sravasti and the Temptation of Mara legends. Only the lower floor of cave 6 was finished. The unfinished upper floor of cave 6 has many private votive sculptures, and a shrine Buddha. The lower level of Cave 6 likely was the earliest excavation in the second stage of construction. This stage marked the Mahayana theme and Vakataka renaissance period of Ajanta reconstruction that started about four centuries after the earlier Hinayana theme construction. The upper storey was not envisioned in the beginning, it was added as an afterthought, likely around the time when the architects and artists abandoned further work on the geologically-flawed rock of Cave 5 immediately next to it. Both lower and upper Cave 6 show crude experimentation and construction errors. The cave work was most likely in progress between 460 and 470 CE, and it is the first that shows attendant Bodhisattvas. The upper cave construction probably began in 465, progressed swiftly, and much deeper into the rock than the lower level. The walls and sanctum's door frame of the both levels are intricately carved. These show themes such as makaras and other mythical creatures, apsaras, elephants in different stages of activity, females in waving or welcoming gesture. The upper level of Cave 6 is significant in that it shows a devotee in a kneeling posture at the Buddha's feet, an indication of devotional worship practices by the 5th century. The colossal Buddha of the shrine has an elaborate throne back, but was hastily finished in 477/478 CE, when king Harisena died. The shrine antechamber of the cave features an unfinished sculptural group of the Six Buddhas of the Past, of which only five statues were carved. This idea may have been influenced from those in Bagh Caves of Madhya Pradesh. Cave 7 Cave 7 is also a monastery (15.55 × 31.25 m) but a single storey. It consists of a sanctum, a hall with octagonal pillars, and eight small rooms for monks. The sanctum Buddha is shown in preaching posture. There are many art panels narrating Buddhist themes, including those of the Buddha with Nagamuchalinda and Miracle of Sravasti. Cave 7 has a grand facade with two porticos. The veranda has eight pillars of two types. One has an octagonal base with amalaka and lotus capital. The other lacks a distinctly shaped base, features an octagonal shaft instead with a plain capital. The veranda opens into an antechamber. On the left side in this antechamber are seated or standing sculptures such as those of 25 carved seated Buddhas in various postures and facial expressions, while on the right side are 58 seated Buddha reliefs in different postures, all placed on lotus. These Buddhas and others on the inner walls of the antechamber are a sculptural depiction of the Miracle of Sravasti in Buddhist theology. The bottom row shows two Nagas (serpents with hoods) holding the blooming lotus stalk. The antechamber leads to the sanctum through a door frame. On this frame are carved two females standing on makaras (mythical sea creatures). Inside the sanctum is the Buddha sitting on a lion throne in cross legged posture, surrounded by other Bodhisattva figures, two attendants with chauris and flying apsaras above. Perhaps because of faults in the rock, Cave 7 was never taken very deep into the cliff. It consists only of the two porticos and a shrine room with antechamber, with no central hall. Some cells were fitted in. The cave artwork likely underwent revisions and refurbishments over time. The first version was complete by about 469 CE, the myriad Buddhas added and painted a few years later between 476 and 478 CE. Cave 8 Cave 8 is another unfinished monastery (15.24 × 24.64 m). For many decades in the 20th-century, this cave was used as a storage and generator room. It is at the river level with easy access, relatively lower than other caves, and according to Archaeological Survey of India it is possibly one of the earliest monasteries. Much of its front is damaged, likely from a landslide. The cave excavation proved difficult and probably abandoned after a geological fault consisting of a mineral layer proved disruptive to stable carvings. Spink, in contrast, states that Cave 8 is perhaps the earliest cave from the second period, its shrine an "afterthought". It may well be the oldest Mahayana monastery excavated in India, according to Spink. The statue may have been loose rather than carved from the living rock, as it has now vanished. The cave was painted, but only traces remain. Cave 9 Caves 9 and 10 are the two chaitya or worship halls from the 2nd to 1st century BCE – the first period of construction, though both were reworked upon the end of the second period of construction in the 5th century CE. Cave 9 (18.24 m × 8.04 m) is smaller than Cave 10 (30.5 m × 12.2 m), but more complex. This has led Spink to the view that Cave 10 was perhaps originally of the 1st century BCE, and cave 9 about a hundred years later. The small "shrinelets" called caves 9A to 9D and 10A also date from the second period. These were commissioned by individuals. Cave 9 arch has remnant profile that suggests that it likely had wooden fittings. The cave has a distinct apsidal shape, nave, aisle and an apse with an icon, architecture, and plan that reminds one of the cathedrals built in Europe many centuries later. The aisle has a row of 23 pillars. The ceiling is vaulted. The stupa is at the center of the apse, with a circumambulation path around it. The stupa sits on a high cylindrical base. On the left wall of the cave are votaries approaching the stupa, which suggests a devotional tradition. According to Spink, the paintings in this cave, including the intrusive standing Buddhas on the pillars, were added in the 5th century. Above the pillars and also behind the stupa are colorful paintings of the Buddha with Padmapani and Vajrapani next to him, they wear jewels and necklaces, while yogis, citizens and Buddhist bhikshu are shown approaching the Buddha with garlands and offerings, with men wearing dhoti and turbans wrapped around their heads. On the walls are friezes of Jataka tales, but likely from the Hinayana phase of early construction. Some of the panels and reliefs inside as well as outside Cave 10 do not make narrative sense, but are related to Buddhist legends. This lack of narrative flow may be because these were added by different monks and official donors in the 5th century wherever empty space was available. This devotionalism and the worship hall character of this cave is the likely reason why four additional shrinelets 9A, 9B, 9C, and 9D were added between Cave 9 and 10. Cave 10 Cave 10, a vast prayer hall or Chaitya, is dated to about the 1st century BCE, together with the nearby vihara cave No 12. These two caves are thus among the earliest of the Ajanta complex. It has a large central apsidal hall with a row of 39 octagonal pillars, a nave separating its aisle and stupa at the end for worship. The stupa has a pradakshina patha (circumambulatory path). This cave is significant because its scale confirms the influence of Buddhism in South Asia by the 1st century BCE and its continued though declining influence in India through the 5th century CE. Further, the cave includes a number of inscriptions where parts of the cave are "gifts of prasada" by different individuals, which in turn suggests that the cave was sponsored as a community effort rather than a single king or one elite official. Cave 10 is also historically important because in April 1819, a British Army officer John Smith saw its arch and introduced his discovery to the attention of the Western audience. Chronology Several others caves were also built in Western India around the same period under royal sponsorship. It is thought that the chronology of these early Chaitya Caves is as follows: first Cave 9 at Kondivite Caves and then Cave 12 at the Bhaja Caves, which both predate Cave 10 of Ajanta. Then, after Cave 10 of Ajanta, in chronological order: Cave 3 at Pitalkhora, Cave 1 at Kondana Caves, Cave 9 at Ajanta, which, with its more ornate designs, may have been built about a century later, Cave 18 at Nasik Caves, and Cave 7 at Bedse Caves, to finally culminate with the "final perfection" of the Great Chaitya at Karla Caves. Inscription Cave 10 features a Sanskrit inscription in Brahmi script that is archaeologically important. The inscription is the oldest of the Ajanta site, the Brahmi letters being paleographically dated to circa the 2nd century BCE. It reads: Paintings The paintings in cave 10 include some surviving from the early period, many from an incomplete programme of modernisation in the second period, and a very large number of smaller late intrusive images for votive purposes, around the 479–480 CE, nearly all Buddhas and many with donor inscriptions from individuals. These mostly avoided over-painting the "official" programme and after the best positions were used up are tucked away in less prominent positions not yet painted; the total of these (including those now lost) was probably over 300, and the hands of many different artists are visible. The paintings are numerous and from two periods, many narrating the Jataka tales in a clockwise sequence. Both Hinayana and Mahayana stage paintings are discernable, though the former are more faded and begrimed with early centuries of Hinayana worship. Of interest here is the Saddanta Jataka tale – the fable about six tusked elephant, and the Shyama Jataka – the story about the man who dedicates his life serving his blind parents. According to Stella Kramrisch, the oldest layer of the Cave 10 paintings date from about 100 BCE, and the principles behind their composition are analogous to those from the same era at Sanchi and Amaravati. Cave 11 Cave 11 is a monastery (19.87 × 17.35 m) built during c. 462 to 478. The cave veranda has pillars with octagonal shafts and square bases. The ceiling of the veranda shows evidence of floral designs and eroded reliefs. Only the center panel is discernible wherein the Buddha is seen with votaries lining up to pray before him. Inside, the cave consists of a hall with a long rock bench opening into six rooms. Similar stone benches are found in Nasik Caves. Another pillared verandah ends in a sanctum with seated Buddha against an incomplete stupa, and has four cells. The cave has a few paintings showing Bodhisattvas and the Buddha. Of these, the Padmapani, a couple gathered to pray, a pair of peafowl, and a female figure painting have survived in the best condition. The sanctum of this cave may be among the last structures built at Ajanta because it features a circumambulation path around the seated Buddha. Cave 12 According to Archaeological Survey of India (ASI), Cave 12 is an early stage Hinayana (Theravada) monastery (14.9 × 17.82 m) from the 2nd to 1st century BCE. Spink however only dates it to the 1st century BCE. The cave is damaged with its front wall completely collapsed. Its three sides inside have twelve cells, each with two stone beds. Cave 13 Cave 13 is another small monastery from the early period, consisting of a hall with seven cells, each also with two stone beds, all carved out of the rock. Each cell has rock-cut beds for the monks. In contrast to ASI's estimate, Gupte and Mahajan date both these caves about two to three centuries later, between 1st and 2nd-century CE. Cave 14 Cave 14 is another unfinished monastery (13.43 × 19.28 m) but carved above Cave 13. The entrance door frame shows sala bhanjikas. Cave 15 Cave 15 is a more complete monastery (19.62 × 15.98 m) with evidence that it had paintings. The cave consists of an eight-celled hall ending in a sanctum, an antechamber and a verandah with pillars. The reliefs show the Buddha, while the sanctum Buddha is shown seated in the Simhasana posture. Cave 15 door frame has carvings of pigeons eating grain. Cave 15A Cave 15A is the smallest cave with a hall and one cell on each side. Its entrance is just to the right of the elephant-decorated entrance to Cave 16. It is an ancient Hinayana cave with three cells opening around a minuscule central hall. The doors are decorated with a rail and arch pattern. It had an inscription in an ancient script, which has been lost. Cave 16 Cave 16 occupies a prime position near the middle of site, and was sponsored by Varahadeva, minister of Vakataka king Harishena (r. ). He was a follower of Buddhism. He devoted it to the community of monks, with an inscription that expresses his wish, may "the entire world (...) enter that peaceful and noble state free from sorrow and disease" and affirming his devotion to the Buddhist faith: "regarding the sacred law as his only companion, (he was) extremely devoted to the Buddha, the teacher of the world". He was, states Spink, probably someone who revered both the Buddha and the Hindu gods, as he proclaims his Hindu heritage in an inscription in the nearby Ghatotkacha Cave. The 7th-century Chinese traveler Xuan Zang described the cave as the entrance to the site. Cave 16 (19.5 m × 22.25 m × 4.6 m) influenced the architecture of the entire site. Spink and other scholars call it the "crucial cave" that helps trace the chronology of the second and closing stages of the entire cave's complex construction. Cave 16 is a Mahayana monastery and has the standard arrangement of a main doorway, two windows, and two aisle doorways. The veranda of this monastery is 19.5 m × 3 m, while the main hall is almost a perfect square with 19.5 m side. The paintings in Cave 16 are numerous. Narratives include various Jataka tales such as Hasti, Mahaummagga and the Sutasoma fables. Other frescos depict the conversion of Nanda, miracle of Sravasti, Sujata's offering, Asita's visit, the dream of Maya, the Trapusha and Bhallika story, and the ploughing festival. The Hasti Jataka frescos tell the story of a Bodhisattva elephant who learns of a large group of people starving, then tells them to go below a cliff where they could find food. The elephant proceeds to sacrifice himself by jumping off that cliff thereby becoming food so that the people can survive. These frescos are found immediately to the left of entrance, in the front corridor and the narrative follows a clockwise direction. The Mahaummagga Jataka frescos are found on the left wall of the corridor, which narrates the story of a child Bodhisattva. Thereafter, in the left corridor is the legend surrounding the conversion of Nanda – the half brother of the Buddha. The story depicted is one of the two major versions of the Nanda legend in the Buddhist tradition, one where Nanda wants to lead a sensuous life with the girl he had just wed and the Buddha takes him to heaven and later hell to show the spiritual dangers of a sensual life. After the Nanda-related frescos, the cave presents Manushi Buddhas, followed by flying votaries with offerings to worship the Buddha and the Buddha seated in teaching asana and dharma chakra mudra. The right wall of the corridor show the scenes from the life of the Buddha. These include Sujata offering food to the Buddha with a begging bowl in white dress, Tapussa and Bhalluka next to the Buddha after they offering wheat and honey to the Buddha as monk, the future Buddha sitting alone under a tree, and the Buddha at a ploughing festival. One mural shows Buddha's parents trying to dissuade him from becoming a monk. Another shows the Buddha at the palace surrounded by men in dhoti and women in sari as his behavior presents the four signs that he is likely to renounce. On this side of the corridor are also paintings that show the future Buddha as a baby with sage Asita with rishi-like looks. According to Spink, some of the Cave 16 paintings were left incomplete. Cave 17 Cave 17 (34.5 m × 25.63 m) along with Cave 16 with two great stone elephants at the entrance and Cave 26 with sleeping Buddha, were some of the many caves sponsored by the Hindu Vakataka prime minister Varahadeva. Cave 17 had additional donors such as the local king Upendragupta, as evidenced by the inscription therein. The cave features a large and most sophisticated vihara design, along with some of the best-preserved and well-known paintings of all the caves. While Cave 16 is known for depicting the life stories of the Buddha, the Cave 17 paintings has attracted much attention for extolling human virtues by narrating the Jataka tales. The narration includes attention to details and a realism which Stella Kramrisch calls "lavish elegance" accomplished by efficient craftsmen. The ancient artists, states Kramrisch, tried to show wind passing over a crop by showing it bending in waves, and a similar profusion of rhythmic sequences that unroll story after story, visually presenting the metaphysical. The Cave 17 monastery includes a colonnaded porch, a number of pillars each with a distinct style, a peristyle design for the interior hall, a shrine antechamber located deep in the cave, larger windows and doors for more light, along with extensive integrated carvings of Indian gods and goddesses. The hall of this monastery is a square, with 20 pillars. The grand scale of the carving also introduced errors of taking out too much rock to shape the walls, states Spink, which led to the cave being splayed out toward the rear. Cave 17 has one long inscription by king Upendragupta, in which he explains that he has "expended abundant wealth" on building this vihara, bringing much satisfaction to the devotees. Altogether, Upendragupta is known to have sponsored at least 5 of the caves in Ajanta. He may have spent too much wealth on religious pursuits however, as he was ultimately defeated by the attacks of the Asmaka. Cave 17 has thirty major murals. The paintings of Cave 17 depict Buddha in various forms and postures – Vipasyi, Sikhi, Visvbhu, Krakuchchanda, Kanakamuni, Kashyapa and Sakyamuni. Also depicted are Avalokitesvara, the story of Udayin and Gupta, the story of Nalagiri, the Wheel of life, a panel celebrating various ancient Indian musicians and a panel that tells of Prince Simhala's expedition to Sri Lanka. The narrative frescos depict the various Jataka tales such as the Shaddanta, Hasti, Hamsa, Vessantara, Sutasoma, Mahakapi (in two versions), Sarabhamiga, Machchha, Matiposaka, Shyama, Mahisha, Valahassa, Sibi, Ruru and Nigrodamiga Jatakas. The depictions weave in the norms of the early 1st millennium culture and the society. They show themes as diverse as a shipwreck, a princess applying makeup, lovers in scenes of dalliance, and a wine drinking scene of a couple with the woman and man amorously seated. Some frescos attempt to show the key characters from various parts of a Jataka tale by co-depicting animals and attendants in the same scene. Cave 18 Cave 18 is a small rectangular space (3.38 × 11.66 m) with two octagonal pillars and it joins into another cell. Its role is unclear. Cave 19 (5th century CE) Cave 19 is a worship hall (chaitya griha, 16.05 × 7.09 m) datable to the fifth century CE. The hall shows painted Buddha, depicted in different postures. This worship hall is now visited through what was previously a carved room. The presence of this room before the hall suggests that the original plan included a mandala style courtyard for devotees to gather and wait, an entrance and facade to this courtyard, all of whose ruins are now lost to history. Cave 19 is one of the caves known for its sculpture. It includes Naga figures with a serpent canopy protecting the Buddha, similar to those found for spiritual icons in the ancient Jain and Hindu traditions. It includes Yaksha dvarapala (guardian) images on the side of its vatayana (arches), flying couples, sitting Buddha, standing Buddhas and evidence that its ceiling was once painted. Cave 19 drew upon on the plan and experimentation in Cave 9. It made a major departure from the earlier Hinayana tradition, by carving a Buddha into the stupa, a decision that states Spink must have come from "the highest levels" in the 5th-century Mahayana Buddhist establishment because the king and dynasty that built this cave was from the Shaivism Hindu tradition. Cave 19 excavation and stupa was likely in place by 467 CE, and its finishing and artistic work continued into the early 470s, but it too was an incomplete cave when it was dedicated in 471 CE. The entrance facade of the Cave 19 worship hall is ornate. Two round pillars with fluted floral patterns and carved garlands support a porch. Its capital is an inverted lotus connecting to an amalaka. To its left is standing Buddha in varada hasta mudra with a devotee prostrating at his feet. On right is a relief of woman with one hand holding a pitcher and other touching her chin. Above is a seated Buddha in meditating mudra. Towards the right of the entrance is the "Mother and Child" sculpture. A figure with begging bowl is the Buddha, watching him are his wife and son. The worship hall is apsidal, with 15 pillars dividing it into two side aisles and one nave. The round pillars have floral reliefs and a fluted shaft topped with Buddha in its capitals. Next, to the Buddha in the capitals are elephants, horses and flying apsara friezes found elsewhere in India, reflecting the style of the Gupta Empire artwork. According to Sharma, the similarities at the Karla Caves Great Chaitya, built in the 2nd century CE, suggest that Cave 19 may have been modeled after it. The walls and the ceiling of the side aisles inside the worship hall are covered with paintings. These show the Buddha, flowers, and in the left aisle the "Mother and Child" legend again. Cave 20 Cave 20 is a monastery hall (16.2 × 17.91 m) from the 5th century. Its construction, states Spink, was started in the 460s by king Upendragupta, with his expressed desire "to make the great tree of religious merit grow". The work on Cave 20 was pursued in parallel with other caves. Cave 20 has exquisite detailing, states Spink, but it was relatively lower on priority than Caves 17 and 19. The work on Cave 20 was intermittently stopped and then continued in the following decade. The vihara consists of a sanctum, four cells for monks and a pillared verandah with two stone cut windows for light. Prior to entering the main hall, on the left of veranda are two Buddhas carved above the window and side cell. The ceiling of the main hall has remnants of painting. The sanctum Buddha is in preaching posture. The cave is known for the sculpture showing seven Buddhas with attendants on its lintel. The cave has a dedicatory Sanskrit inscription in Brahmi script in its verandah, and it calls the cave as a mandapa. Many of the figural and ornamental carvings in Cave 20 are similar to Cave 19, and to a lesser degree to those found in Cave 17. This may be because the same architects and artisans were responsible for the evolution of the three caves. The door frames in Cave 20 are quasi-structural, something unique at the Ajanta site. The decorations are also innovative in Cave 20, such as one showing the Buddha seated against two pillows and "a richly laden mango tree behind him", states Spink. Cave 21 Cave 21 is a hall (28.56 × 28.03 m) with twelve rock-cut rooms for monks, a sanctum, twelve pillared and pilastered verandah. The carvings on the pilaster include those of animals and flowers. The pillars feature reliefs of apsaras, Nagaraja and Nagarani, as well as devotees bowing with the Anjali mudra. The hall shows evidence that it used to be completely painted. The sanctum Buddha is shown in preaching posture. Cave 22 Cave 22 is a small vihara (12.72 × 11.58 m) with a narrow veranda and four unfinished cells. It is excavated at a higher level and has to be reached by a flight of steps. Inside, the Buddha is seated in pralamba-padasana. The painted figures in Cave 22 show Manushi-Buddhas with Maitreya. A pilaster on the left side of the Cave 22 veranda has a Sanskrit prose inscription. It is damaged in parts, and the legible parts state that this is a "meritorious gift of a mandapa by Jayata", calling Jayata's family as "a great Upasaka", and ending the inscription with "may the merit of this be for excellent knowledge to all sentient beings, beginning with father and mother". Cave 23 Cave 23 is also unfinished, consisting of a hall (28.32 × 22.52 m) but a design similar to Cave 21. The cave differs in its pillar decorations and the naga doorkeepers. Cave 24 Cave 24 is like Cave 21, unfinished but much larger. It features the second largest monastery hall (29.3 × 29.3 m) after Cave 4. The cave 24 monastery has been important to scholarly studies of the site because it shows how multiple crews of workers completed their objectives in parallel. The cell construction began as soon as the aisle had been excavated and while the main hall and sanctum were under construction. The construction of Cave 24 was planned in 467 CE, but likely started in 475 CE, with support from Buddhabhadra, then abruptly ended in 477 with the sponsor king Harisena's death. It is significant in having one of the most complex capitals on a pillar at the Ajanta site, an indication of how the artists excelled and continuously improved their sophistication as they worked with the rock inside the cave. The artists carved fourteen complex miniature figures on the central panel of the right center porch pillar, while working in dim light in a cramped cave space. The medallion reliefs in Cave 24 similarly show loving couples and anthropomorphic arts, rather than flowers of earlier construction. Cave 24's sanctum has a seated Buddha in pralamba-padasana. Cave 25 Cave 25 is a monastery. Its hall (11.37 × 12.24 m) is similar to other monasteries, but has no sanctum, includes an enclosed courtyard and is excavated at an upper level. Cave 26 (5th century CE) Cave 26 is a worship hall (chaityagriha, 25.34 × 11.52 m) similar in plan to Cave 19. It is much larger and with elements of a vihara design. An inscription states that a monk Buddhabhadra and his friend minister serving king of Asmaka gifted this vast cave. The inscription includes a vision statement and the aim to make "a memorial on the mountain that will endure for as long as the moon and the sun continue", translates Walter Spink. It is likely that the builders focussed on sculpture, rather than paintings, in Cave 26 because they believed stone sculpture will far more endure than paintings on the wall. The sculptures in Cave 26 are elaborate and more intricate. It is among the last caves excavated, and an inscription suggests late 5th or early 6th century according to ASI. The cave consists of an apsidal hall with side aisles for circumambulation (pradikshana). This path is full of carved Buddhist legends, three depictions of the Miracle of Sravasti in the right ambulatory side of the aisle, and seated Buddhas in various mudra. Many of these were added later by devotees, and therefore are intrusive to the aims of the original planners. The artwork begins on the wall of the aisle, immediately the left side of entrance. The major artworks include the Mahaparinirvana of Buddha (reclining Buddha) on the wall, followed by the legend called the "Temptations by Mara". The temptations include the seduction by Mara's daughters who are depicted below the meditating Buddha. They are shown scantly dressed and in seductive postures, while on both the left and right side of the Buddha are armies of Mara attempting to distract him with noise and threaten him with violence. In the top right corner is the image of a dejected Mara frustrated by his failure to disturb the resolve or focus of the ascetic Buddha. At the center of the apse is a rock-cut stupa. The stupa has an image of the Buddha on its front, 18 panels on its base, 18 panels above these, a three tiered torana above him, and apsaras are carved on the anda (hemispherical egg) stupa. On top of the dagoba is a nine-tiered harmika, a symbolism for the nine saṃsāra (Buddhism) heavens in Mahayana cosmology. The walls, pillars, brackets and the triforium are extensively carved with Buddhist themes. Many of the wall reliefs and images in this cave were badly damaged, and have been restored as a part of the site conservation efforts. Between cave 26 and its left wing, there is an inscription by a courtier of Rashtrakuta Nanaraj (who is mentioned in the Multai and Sangaloda plates), from late 7th or early 8th century. It is the last inscription in Ajanta. Cave 27 Cave 27 is a monastery and may have been planned as an attachment to Cave 26. Its two storeys are damaged, with the upper level partially collapsed. Its plan is similar to other monasteries. Cave 28 Cave 28 is an unfinished monastery, partially excavated, at the westernmost end of the Ajanta complex and barely accessible. Cave 29 Cave 29 an unfinished monastery at the highest level of the Ajanta complex, apparently unnoticed when the initial numbering system was established, and physically located between Caves 20 and 21. Cave 30 In 1956, a landslide covered the footpath leading to Cave 16. In the attempts to clear and restore the walkway, a small aperture and votive stupa were noticed in the debris by the workers, in a location near the stream bed. Further tracing and excavations led to a previously unknown Hinayana monastery cave dated to the 2nd and 1st century BCE. Cave 30 may actually be the oldest cave of the Ajanta complex. It is a 3.66 m × 3.66 m cave with three cells, each with two stone beds and stone pillows on the side of each cell. The cell door lintels show lotus and garland carvings. The cave has two inscriptions in an unknown script. It also has a platform on its veranda with a fine view of the river ravine below and the forest cover. According to Gupte and Mahajan, this cave may have been closed at some point with large carefully carved pieces as it distracted the entrance view of Cave 16. Other infrastructure Over 80% of the Ajanta caves were vihara (temporary traveler residences, monasteries). The designers and artisans who built these caves included facilities for collecting donations and storing grains and food for the visitors and monks. Many of the caves include large repositories cut into the floor. The largest storage spaces are found, states Spink, in the "very commodious recesses in the shrines of both Ajanta Cave Lower 6 and Cave 11". These caves were probably chosen because of their relative convenience and the security they offered due to their higher level. The choice of integrating covered vaults cut into the floor may have been driven by the need to provide sleeping space and logistical ease. Recent excavations A burnt-brick vihara monastery facing the caves on the right bank of the river Waghora has been recently excavated. It has a number of cells facing a central courtyard, in which a stupa was established. A coin of the Western Satraps ruler Visvasena (ruled 293–304 CE) as well as a gold coin of the Byzantine Emperor Theodosius II (ruled 402-450 CE) were found in the excavations, giving further numismatic confirmation for the dating of the caves. A terracotta plaque of Mahishasuramardini was also found, which was possibly under worship by the artisans. Copies of the paintings The paintings have deteriorated significantly since they were rediscovered, and a number of 19th-century copies and drawings are important for a complete understanding of the works. A number of attempts to copy the Ajanta paintings began in the 19th century for European and Japanese museums. Some of these works have later been lost in natural and fire disasters. In 1846 for example, Major Robert Gill, an Army officer from Madras Presidency and a painter, was appointed by the Royal Asiatic Society to make copies of the frescos on the cave walls. Gill worked on his painting at the site from 1844 to 1863. He made 27 copies of large sections of murals, but all but four were destroyed in a fire at the Crystal Palace in London in 1866, where they were on display. Gill returned to the site, and recommenced his labours, replicating the murals until his death in 1875. Another attempt was made in 1872 when the Bombay Presidency commissioned John Griffiths to work with his students to make copies of Ajanta paintings, again for shipping to England. They worked on this for thirteen years and some 300 canvases were produced, many of which were displayed at the Imperial Institute on Exhibition Road in London, one of the forerunners of the Victoria and Albert Museum. But in 1885 another fire destroyed over a hundred of the paintings in storage in a wing of the museum. The V&A still has 166 paintings surviving from both sets, though none have been on permanent display since 1955. The largest are some . A conservation project was undertaken on about half of them in 2006, also involving the University of Northumbria. Griffith and his students had painted many of the paintings with "cheap varnish" in order to make them easier to see, which has added to the deterioration of the originals, as has, according to Spink and others, recent cleaning by the ASI. A further set of copies were made between 1909 and 1911 by Christiana Herringham (Lady Herringham) and a group of students from the Calcutta School of Art that included the future Indian Modernist painter Nandalal Bose. The copies were published in full colour as the first publication of London's fledgling India Society. More than the earlier copies, these aimed to fill in holes and damage to recreate the original condition rather than record the state of the paintings as she was seeing them. According to one writer, unlike the paintings created by her predecessors Griffiths and Gill, whose copies were influenced by British Victorian styles of painting, those of the Herringham expedition preferred an 'Indian Renascence' aesthetic of the type pioneered by Abanindranath Tagore. Early photographic surveys were made by Robert Gill, whose photos, including some using stereoscopy, were used in books by him and Fergusson (many are available online from the British Library), then Victor Goloubew in 1911 and E.L. Vassey, who took the photos in the four volume study of the caves by Ghulam Yazdani (published 1930–1955). Some slightly creative copies of Ajanta frescos, especially the painting of the Adoration of the Buddha from the shrine antechamber of Cave 17, were commissioned by Thomas Holbein Hendley (1847–1917) for the decoration of the walls of the hall of the Albert Hall Museum, Jaipur, India. He had the work painted by a local artist variously named Murli or Murali. The museum was opened to the public in 1887. This work is otherwise presented as characteristic of the end of the 19th century. Another attempt to make copies of the murals was made by the Japanese artist Arai Kampō (荒井寛方:1878–1945) after being invited by Rabindranath Tagore to India to teach Japanese painting techniques. He worked on making copies with tracings on Japanese paper from 1916 to 1918 and his work was conserved at Tokyo Imperial University until the materials perished during the 1923 Great Kantō earthquake. Significance Natives, society and culture in the arts at Ajanta The Ajanta cave arts are a window into the culture, society and religiosity of the native population of India between the 2nd century BCE and 5th century CE. Different scholars have variously interpreted them from the perspective of gender studies, history, sociology, and the anthropology of South Asia. The dress, the jewelry, the gender relations, the social activities depicted showcase at least a lifestyle of the royalty and elite, and in others definitely the costumes of the common man, monks and rishi depicted therein. They shine "light on life in India" around mid 1st millennium CE. The Ajanta artworks provide a contrast between the spiritual life of monks who had given up all materialistic possessions versus the sensual life of those it considered materialistic, luxurious, symbols of wealth, leisurely and high fashion. Many frescos show scenes from shops, festivals, jesters at processions, palaces and performance art pavilions. These friezes share themes and details of those found in Bharhut, Sanchi, Amaravati, Ellora, Bagh, Aihole, Badami and other archaeological sites in India. Ajanta caves contributes to visual and descriptive sense of the ancient and early medieval Indian culture and artistic traditions, particularly those around the Gupta Empire era period. The early colonial era description of Ajanta caves was largely orientalist and critical, inconsistent with the Victorian values and stereotyping. According to William Dalrymple, the themes and arts in the Ajanta caves were puzzling to the 19th-century Orientalists. Lacking the Asian cultural heritage and with no knowledge of Jataka Tales or equivalent Indian fables, they could not comprehend it. They projected their own views and assumptions, calling it something that lacks reason and rationale, something that is meaningless crude representation of royalty and foreigners with mysticism and sensuousness. The 19th-century views and interpretations of the Ajanta Caves were conditioned by ideas and assumptions in the colonial mind, saw what they wanted to see. To many who are unaware of the premises of Indian religions in general, and Buddhism in particular, the significance of Ajanta Caves has been like rest of Indian art. According to Richard Cohen, Ajanta Caves to them has been yet another example of "worship this stock, or that stone, or monstrous idol". In contrast, to the Indian mind and the larger Buddhist community, it is everything that art ought to be, the religious and the secular, the spiritual and the social fused to enlightened perfection. According to Walter Spink – one of the most respected Art historians on Ajanta, these caves were by 475 CE a much-revered site to the Indians, with throngs of "travelers, pilgrims, monks and traders". The site was vastly transformed into its current form in just 20 years, between early 460 CE to early 480 CE, by regional architects and artisans. This accomplishment, states Spink, makes Ajanta, "one of the most remarkable creative achievements in man's history". Foreigners in the paintings of Ajanta The Ajanta Caves painting are a significant source of socio-economic information in ancient India, particularly in relation to the interactions of India with foreign cultures at the time most of the paintings were made, in the 5th century CE (Common Era). According to Indian historian Haroon Khan Sherwani: "The paintings at Ajanta clearly demonstrate the cosmopolitan character of Buddhism, which opened its way to men of all races, Greek, Persian, Saka, Pahlava, Kushan and Huna". Depictions of foreigners abound: according to Spink, "Ajanta's paintings are filled with such foreign types." They have sometimes been a source of misinterpretation as in the so-called "Persian Embassy Scene". These foreigners may reflect the Sassanian merchants, visitors and the flourishing trade routes of the day. The so-called "Persian Embassy Scene" Cave 1, for example, shows a mural fresco with characters with foreigner faces or dresses, the so-called "Persian Embassy Scene". This scene is located at the right of the entrance door upon entering the hall. According to Spink, James Fergusson, a 19th-century architectural historian, had decided that this scene corresponded to the Persian ambassador in 625 CE to the court of the Hindu Chalukya king Pulakeshin II. An alternate theory has been that the fresco represents a Hindu ambassador visiting the Persian king Khusrau II in 625 CE, a theory that Fergusson disagreed with. These assumptions by colonial British era art historians, state Spink and other scholars, has been responsible for wrongly dating this painting to the 7th century, when in fact this reflects an incomplete Harisena-era painting of a Jataka tale (the Mahasudarsana jataka, in which the enthroned king is actually the Buddha in one of his previous lives as King) with the representation of trade between India and distant lands such as Sassanian near East that was common by the 5th century. International trade, growth of Buddhism Cave 1 has several frescos with characters with foreigners' faces or dresses. Similar depictions are found in the paintings of Cave 17. Such murals, states Pia Brancaccio, suggest a prosperous and multicultural society in 5th-century India active in international trade. These also suggest that this trade was economically important enough to the Deccan region that the artists chose to include it with precision. Additional evidence of international trade includes the use of the blue lapis lazuli pigment to depict foreigners in the Ajanta paintings, which must have been imported from Afghanistan or Iran. It also suggests, states Branacaccio, that the Buddhist monastic world was closely connected with trading guilds and the court culture in this period. A small number of scenes show foreigners drinking wine in Caves 1 and 2. Some show foreign Near East kings with wine and their retinue which presumably add to the "general regal emphasis" of the cave. According to Brancaccio, the Ajanta paintings show a variety of colorful, delicate textiles and women making cotton. Textile probably was one of the major exports to foreign lands, along with gems. These were exported first through the Red Sea, and later through the Persian Gulf, thereby bringing a period of economic and cultural exchange between the Indians, the Sasanian Empire and the Persian merchants before Islam was founded in the Arabian peninsula. While scholars generally agree that these murals confirm trade and cultural connections between India and Sassanian west, their specific significance and interpretation varies. Brancaccio, for example, suggests that the ship and jars in them probably reflect foreign ships carrying wine imported to India. In contrast, Schlinghoff interprets the jars to be holding water, and ships shown as Indian ships used in international trade. Similar depictions are found in the paintings of Cave 17, but this time in direct relation to the worship of the Buddha. In Cave 17, a painting of the Buddha descending from the Trayastrimsa Heaven shows he being attended by many foreigners. Many foreigners in this painting are thus shown as listeners to the Buddhist Dharma. The ethnic diversity is depicted in the painting in the clothes (kaftans, Sasanian helmets, round caps), hairdos and skin colors. In the Visvantara Jataka of Cave 17, according to Brancaccio, the scene probably shows a servant from Central Asia holding a foreign metal ewer, while a dark-complexioned servant holds a cup to an amorous couple. In another painting in Cave 17, relating to the conversion of Nanda, a man possibly from northeast Africa appears as a servant. These representations show, states Brancaccio, that the artists were familiar with people of Sogdia, Central Asia, Persia and possibly East Africa. Another hypothesis is offered by Upadhya, who states that the artists who built Ajanta caves "very probably included foreigners". Impact on later painting and other arts The Ajanta paintings, or more likely the general style they come from, influenced painting in Tibet and Sri Lanka. Some influences from Ajanta have also suggested in the Kizil Caves of the Tarim Basin, in particular in early caves such as the Peacock Cave. The rediscovery of ancient Indian paintings at Ajanta provided Indian artists with examples from ancient India to follow. Nandalal Bose experimented with techniques to follow the ancient style which allowed him to develop his unique style. Abanindranath Tagore and Syed Thajudeen also used the Ajanta paintings for inspiration. Anna Pavlova's ballet Ajanta's Frescoes was inspired by her visit to Ajanta, choreographed by Ivan Clustine, with music by Nikolai Tcherepnin (one report says Mikhail Fokine in 1923). and premiered at Covent Garden in 1923. Jewish American poet Muriel Rukeyser wrote about the caves in "Ajanta," the opening poem of her third collection Beast in View (1944). Rukeyser was inspired in part by writings on the caves by artist Mukul Dey in 1925 and art historian Stella Kramrisch in 1937. See also Cetiya Bedse Caves Bhaja Caves Dambulla cave temple Kanheri Caves Karla Caves Mogao Caves Nasik Caves Pitalkhora Caves Shivneri Caves List of colossal sculptures in situ Notes References Bibliography "ASI": Archaeological Survey of India website, with a concise entry on the Caves, accessed 20 October 2010 Burgess, James and Fergusson J. Cave Temples of India. (London: W.H. Allen & Co., 1880. Delhi: Munshiram Manoharlal Publishers, 2005). Burgess, James and Indraji, Bhagwanlal. Inscriptions from the Cave Temples of Western India, Archaeological Survey of Western India, Memoirs, 10 (Bombay: Government Central Press, 1881). Burgess, James. Buddhist Cave Temples and Their Inscriptions, Archaeological Survey of Western India, 4 (London: Trubner & Co., 1883; Varanasi: Indological Book House, 1964). Burgess, James. "Notes on the Bauddha Rock Temples of Ajanta, Their Paintings and Sculptures," Archaeological Survey of Western India, 9 (Bombay: Government Central Press, 1879). Behl, Benoy K. The Ajanta Caves (London: Thames & Hudson, 1998. New York: Harry N. Abrams, 1998). . Cohen, Richard S. "Nāga, Yaksinī, Buddha: Local Deities and Local Buddhism at Ajanta," History of Religions. 37/4 (May 1998): 360–400. Cohen, Richard S. "Problems in the Writing of Ajanta's History: The Epigraphic Evidence," Indo-Iranian Journal. 40/2 (April 1997): 125–48. Cohen, Richard S. Setting the Three Jewels: The Complex Culture of Buddhism at the Ajanta Caves. A PhD dissertation (Asian Languages and Cultures: Buddhist Studies, University of Michigan, 1995). Cowell, E.B. The Jataka, I-VI (Cambridge: Cambridge, 1895; reprint, 1907). Dhavalikar, M.K. Late Hinayana Caves of Western India (Pune: 1984). Griffiths, J. Paintings in the Buddhist Cave Temples of Ajanta, 2 vols. (London: 1896–1897). Halder, Asit Kumar. "AJANTA" Edited and annotated by Prasenjit Dasgupta and Soumen Paul, with a foreword by Gautam Halder LALMATI. Kolkata. 2009 Kramrisch, Stella. A Survey of Painting in the Deccan (Calcutta and London: The India Society in co-operation with the Dept. of Archaeology, 1937). Reproduced: "Ajanta," Exploring India's Sacred Art: Selected Writings of Stella Kramrisch, ed. Miller, Barbara Stoler (Philadelphia: University of Pennsylvania Press: 1983), pp. 273–307; reprint (New Delhi: Indira Gandhi National Centre for the Arts, 1994), pp. 273–307. Majumdar, R.C. and A.S. Altekar, eds. The Vakataka-Gupta Age. New History of Indian People Series, VI (Benares: Motilal Banarasidass, 1946; reprint, Delhi: 1960). Mirashi, V.V. "Historical Evidence in Dandin's Dasakumaracharita," Annals of the Bhandarkar Oriental Research Institute, 24 (1945), 20ff. Reproduced: Studies in Indology, 1 (Nagpur: Vidarbha Samshodhan Mandal, 1960), pp. 164–77. Mirashi, V.V. Inscription of the Vakatakas. Corpus Inscriptionum Indicarum Series, 5 (Ootacamund: Government Epigraphist for India, 1963). Mirashi, V.V. The Ghatotkacha Cave Inscriptions with a Note on Ghatotkacha Cave Temples by Srinivasachar, P. (Hyderabad: Archaeological Department, 1952). Mirashi, V.V. Vakataka inscription in Cave XVI at Ajanta. Hyderabad Archaeological Series, 14 (Calcutta: Baptist mission Press for the Archaeological Department of His Highness the Nizam's Dominions, 1941). Mitra, Debala. Ajanta, 8th ed. (Delhi: Archaeological Survey of India, 1980). Nagaraju, S. Buddhist Architecture of Western India (Delhi: 1981) Parimoo, Ratan; et al. The Art of Ajanta: New Perspectives, 2 vols (New Delhi: Books & Books, 1991). Schlingloff, Dieter. Guide to the Ajanta Paintings, vol. 1; Narrative Wall Paintings (Delhi: Munshiram Manoharlal Publishers Pvt. Ltd., 1999) Schlingloff, Dieter. Studies in the Ajanta Paintings: Identifications and Interpretations (New Delhi: 1987). Shastri, Ajay Mitra, ed. The Age of the Vakatakas (New Delhi: Harman, 1992). Singh, Rajesh Kumar. An Introduction to the Ajanta Caves (Baroda: Hari Sena Press, 2012). Singh, Rajesh Kumar. 'The Early Development of the Cave 26-Complex at Ajanta,' South Asian Studies (London: March 2012), vol. 28, No. 1, pp. 37–68. Singh, Rajesh Kumar. 'Buddhabhadra's Dedicatory Inscription at Ajanta: A Review,' in Pratnakirti: Recent Studies in Indian Epigraphy, History, Archaeology, and Art, 2 vols, Professor Shrinivas S. Ritti Felicitation volume, ed. by Shriniwas V. Padigar and Shivanand V (Delhi: Agam Kala Prakashan, 2012), vol. 1, pp. 34–46. Singh, Rajesh Kumar, et al. Ajanta: Digital Encyclopaedia [CD-Rom] (New Delhi: Indira Gandhi National Centre for Arts, 2005). Singh, Rajesh Kumar. "Enumerating the Sailagrhas of Ajanta," Journal of the Asiatic Society of Mumbai 82, 2009: 122–26. Singh, Rajesh Kumar. "Ajanta: Cave 8 Revisited," Jnana-Pravah Research Journal 12, 2009: 68–80. Singh, Rajesh Kumar. "Some Problems in Fixing the Date of Ajanta Caves," Kala, the Journal of Indian Art History Congress 17, 2008: 69–85. Spink, Walter M. "A Reconstruction of Events related to the development of Vakataka caves," C.S. Sivaramamurti felicitation volume, ed. M.S. Nagaraja Rao (New Delhi: 1987). Spink, Walter M. "Ajanta's Chronology: Cave 1's Patronage," Chhavi 2, ed. Krishna, Anand (Benares: Bharat Kala Bhawan, 1981), pp. 144–57. Spink, Walter M. "Ajanta's Chronology: Cave 7's Twice-born Buddha," Studies in Buddhist Art of South Asia, ed. Narain, A.K. (New Delhi: 1985), pp. 103–16. Spink, Walter M. "Ajanta's Chronology: Politics and Patronage," Kaladarsana, ed. Williams, Joanna (New Delhi: 1981), pp. 109–26. Spink, Walter M. "Ajanta's Chronology: The Crucial Cave," Ars Orientalis, 10 (1975), pp. 143–169. Spink, Walter M. "Ajanta's Chronology: The Problem of Cave 11," Ars Orientalis, 7 (1968), pp. 155–168. Spink, Walter M. "Ajanta's Paintings: A Checklist for their Dating," Dimensions of Indian Art, Pupul Jayakar Felicitation Volume, ed. Chandra, Lokesh; and Jain, Jyotindra (Delhi: Agam Kala Prakashan, 1987), p. 457. Spink, Walter M. "Notes on Buddha Images," The Art of Ajanta: New Perspectives, vol. 2, ed. Parimoo, Ratan, et al. (New Delhi: Books & Books, 1991), pp. 213–41. Spink, Walter M. "The Achievement of Ajanta," The Age of the Vakatakas, ed. Shastri, Ajaya Mitra (New Delhi: Harman Publishing House, 1992), pp. 177–202. Spink, Walter M. "The Vakataka's Flowering and Fall," The Art of Ajanta: New Perspectives, vol. 2, ed. Parimoo, Ratan, et al. (New Delhi: Books & Books, 1991), pp. 71–99. Spink, Walter M. "The Archaeology of Ajanta," Ars Orientalis, 21, pp. 67–94. Weiner, Sheila L. Ajanta: Its Place in Buddhist Art (Berkeley and Los Angeles: University of California Press, 1977). Yazdani, Gulam. Ajanta: the Colour and Monochrome Reproductions of the Ajanta Frescos Based on Photography, 4 vols. (London: Oxford University Press, 1930 [31?], 1955). Yazdani, Gulam. The Early History of the Deccan, Parts 7–9 (Oxford: 1960). Zin, Monika. Guide to the Ajanta Paintings, vol. 2; Devotional and Ornamental Paintings (Delhi: Munshiram Manoharlal Publishers Pvt. Ltd., 2003) External links Ajanta Caves Bibliography, Akira Shimada (2014), Oxford University Press The Early Development of the Cave 26-Complex at Ajanta The Greatest Ancient Picture Gallery. William Dalrymple, New York Review of Books (23 Oct 2014) Ajanta Caves in UNESCO List Google Streetview Tours of each Cave of Ajanta Inscriptions with Translations: Ajanta Caves, Richard Cohen 2nd-century BC establishments 1819 archaeological discoveries Architecture in India Indian art Indian painting Buddhist pilgrimage sites in India Caves of Maharashtra World Heritage Sites in Maharashtra Caves containing pictograms in India Former populated places in India Tourist attractions in Aurangabad district, Maharashtra Indian rock-cut architecture Buddhist caves in India Buddhist paintings Gupta art Indian Buddhist sculpture World Heritage Sites in India Vakataka dynasty
2703
https://en.wikipedia.org/wiki/Aberration%20%28astronomy%29
Aberration (astronomy)
In astronomy, aberration (also referred to as astronomical aberration, stellar aberration, or velocity aberration) is a phenomenon where celestial objects exhibit an apparent motion about their true positions based on the velocity of the observer: It causes objects to appear to be displaced towards the observer's direction of motion. The change in angle is of the order of v/c where c is the speed of light and v the velocity of the observer. In the case of "stellar" or "annual" aberration, the apparent position of a star to an observer on Earth varies periodically over the course of a year as the Earth's velocity changes as it revolves around the Sun, by a maximum angle of approximately 20 arcseconds in right ascension or declination. The term aberration has historically been used to refer to a number of related phenomena concerning the propagation of light in moving bodies. Aberration is distinct from parallax, which is a change in the apparent position of a relatively nearby object, as measured by a moving observer, relative to more distant objects that define a reference frame. The amount of parallax depends on the distance of the object from the observer, whereas aberration does not. Aberration is also related to light-time correction and relativistic beaming, although it is often considered separately from these effects. Aberration is historically significant because of its role in the development of the theories of light, electromagnetism and, ultimately, the theory of special relativity. It was first observed in the late 1600s by astronomers searching for stellar parallax in order to confirm the heliocentric model of the Solar System. However, it was not understood at the time to be a different phenomenon. In 1727, James Bradley provided a classical explanation for it in terms of the finite speed of light relative to the motion of the Earth in its orbit around the Sun, which he used to make one of the earliest measurements of the speed of light. However, Bradley's theory was incompatible with 19th-century theories of light, and aberration became a major motivation for the aether drag theories of Augustin Fresnel (in 1818) and G. G. Stokes (in 1845), and for Hendrik Lorentz's aether theory of electromagnetism in 1892. The aberration of light, together with Lorentz's elaboration of Maxwell's electrodynamics, the moving magnet and conductor problem, the negative aether drift experiments, as well as the Fizeau experiment, led Albert Einstein to develop the theory of special relativity in 1905, which presents a general form of the equation for aberration in terms of such theory. Explanation Aberration may be explained as the difference in angle of a beam of light in different inertial frames of reference. A common analogy is to consider the apparent direction of falling rain. If rain is falling vertically in the frame of reference of a person standing still, then to a person moving forwards the rain will appear to arrive at an angle, requiring the moving observer to tilt their umbrella forwards. The faster the observer moves, the more tilt is needed. The net effect is that light rays striking the moving observer from the sides in a stationary frame will come angled from ahead in the moving observer's frame. This effect is sometimes called the "searchlight" or "headlight" effect. In the case of annual aberration of starlight, the direction of incoming starlight as seen in the Earth's moving frame is tilted relative to the angle observed in the Sun's frame. Since the direction of motion of the Earth changes during its orbit, the direction of this tilting changes during the course of the year, and causes the apparent position of the star to differ from its true position as measured in the inertial frame of the Sun. While classical reasoning gives intuition for aberration, it leads to a number of physical paradoxes observable even at the classical level (see history). The theory of special relativity is required to correctly account for aberration. The relativistic explanation is very similar to the classical one however, and in both theories aberration may be understood as a case of addition of velocities. Classical explanation In the Sun's frame, consider a beam of light with velocity equal to the speed of light c, with x and y velocity components and , and thus at an angle θ such that . If the Earth is moving at velocity in the x direction relative to the Sun, then by velocity addition the x component of the beam's velocity in the Earth's frame of reference is , and the y velocity is unchanged, . Thus the angle of the light in the Earth's frame in terms of the angle in the Sun's frame is In the case of , this result reduces to , which in the limit may be approximated by . Relativistic explanation The reasoning in the relativistic case is the same except that the relativistic velocity addition formulas must be used, which can be derived from Lorentz transformations between different frames of reference. These formulas are where , giving the components of the light beam in the Earth's frame in terms of the components in the Sun's frame. The angle of the beam in the Earth's frame is thus In the case of , this result reduces to , and in the limit this may be approximated by . This relativistic derivation keeps the speed of light constant in all frames of reference, unlike the classical derivation above. Relationship to light-time correction and relativistic beaming Aberration is related to two other phenomena, light-time correction, which is due to the motion of an observed object during the time taken by its light to reach an observer, and relativistic beaming, which is an angling of the light emitted by a moving light source. It can be considered equivalent to them but in a different inertial frame of reference. In aberration, the observer is considered to be moving relative to a (for the sake of simplicity) stationary light source, while in light-time correction and relativistic beaming the light source is considered to be moving relative to a stationary observer. Consider the case of an observer and a light source moving relative to each other at constant velocity, with a light beam moving from the source to the observer. At the moment of emission, the beam in the observer's rest frame is tilted compared to the one in the source's rest frame, as understood through relativistic beaming. During the time it takes the light beam to reach the observer the light source moves in the observer's frame, and the 'true position' of the light source is displaced relative to the apparent position the observer sees, as explained by light-time correction. Finally, the beam in the observer's frame at the moment of observation is tilted compared to the beam in source's frame, which can be understood as an aberrational effect. Thus, a person in the light source's frame would describe the apparent tilting of the beam in terms of aberration, while a person in the observer's frame would describe it as a light-time effect. The relationship between these phenomena is only valid if the observer and source's frames are inertial frames. In practice, because the Earth is not an inertial rest frame but experiences centripetal acceleration towards the Sun, many aberrational effects such as annual aberration on Earth cannot be considered light-time corrections. However, if the time between emission and detection of the light is short compared to the orbital period of the Earth, the Earth may be approximated as an inertial frame and aberrational effects are equivalent to light-time corrections. Types The Astronomical Almanac describes several different types of aberration, arising from differing components of the Earth's and observed object's motion: Stellar aberration: "The apparent angular displacement of the observed position of a celestial body resulting from the motion of the observer. Stellar aberration is divided into diurnal, annual, and secular components." Annual aberration: "The component of stellar aberration resulting from the motion of the Earth about the Sun." Diurnal aberration: "The component of stellar aberration resulting from the observer's diurnal motion about the center of the Earth due to the Earth's rotation." Secular aberration: "The component of stellar aberration resulting from the essentially uniform and almost rectilinear motion of the entire solar system in space. Secular aberration is usually disregarded." Planetary aberration: "The apparent angular displacement of the observed position of a solar system body from its instantaneous geocentric direction as would be seen by an observer at the geocenter. This displacement is caused by the aberration of light and light-time displacement." Annual aberration Annual aberration is caused by the motion of an observer on Earth as the planet revolves around the Sun. Due to orbital eccentricity, the orbital velocity of Earth (in the Sun's rest frame) varies periodically during the year as the planet traverses its elliptic orbit and consequently the aberration also varies periodically, typically causing stars to appear to move in small ellipses. Approximating Earth's orbit as circular, the maximum displacement of a star due to annual aberration is known as the constant of aberration, conventionally represented by . It may be calculated using the relation substituting the Earth's average speed in the Sun's frame for and the speed of light . Its accepted value is 20.49552 arcseconds (sec) or 0.000099365 radians (rad) (at J2000). Assuming a circular orbit, annual aberration causes stars exactly on the ecliptic (the plane of Earth's orbit) to appear to move back and forth along a straight line, varying by on either side of their position in the Sun's frame. A star that is precisely at one of the ecliptic poles (at 90° from the ecliptic plane) will appear to move in a circle of radius about its true position, and stars at intermediate ecliptic latitudes will appear to move along a small ellipse. For illustration, consider a star at the northern ecliptic pole viewed by an observer at a point on the Arctic Circle. Such an observer will see the star transit at the zenith, once every day (strictly speaking sidereal day). At the time of the March equinox, Earth's orbit carries the observer in a southwards direction, and the star's apparent declination is therefore displaced to the south by an angle of . On the September equinox, the star's position is displaced to the north by an equal and opposite amount. On either solstice, the displacement in declination is 0. Conversely, the amount of displacement in right ascension is 0 on either equinox and at maximum on either solstice. In actuality, Earth's orbit is slightly elliptic rather than circular, and its speed varies somewhat over the course of its orbit, which means the description above is only approximate. Aberration is more accurately calculated using Earth's instantaneous velocity relative to the barycenter of the Solar System. Note that the displacement due to aberration is orthogonal to any displacement due to parallax. If parallax is detectable, the maximum displacement to the south would occur in December, and the maximum displacement to the north in June. It is this apparently anomalous motion that so mystified early astronomers. Solar annual aberration A special case of annual aberration is the nearly constant deflection of the Sun from its position in the Sun's rest frame by towards the west (as viewed from Earth), opposite to the apparent motion of the Sun along the ecliptic (which is from west to east, as seen from Earth). The deflection thus makes the Sun appear to be behind (or retarded) from its rest-frame position on the ecliptic by a position or angle . This deflection may equivalently be described as a light-time effect due to motion of the Earth during the 8.3 minutes that it takes light to travel from the Sun to Earth. The relation with is : [0.000099365 rad / 2 π rad] x [365.25 d x 24 h/d x 60 min/h] = 8.3167 min ≈ 8 min 19 sec = 499 sec. This is possible since the transit time of sunlight is short relative to the orbital period of the Earth, so the Earth's frame may be approximated as inertial. In the Earth's frame, the Sun moves, at a mean velocity v = 29.789 km/s, by a distance ≈ 14,864.7 km in the time it takes light to reach Earth, ≈ 499 sec for the orbit of mean radius = 1 AU = 149,597,870.7 km. This gives an angular correction ≈ 0.000099364 rad = 20.49539 sec, which can be solved to give ≈ 0.000099365 rad = 20.49559 sec, very nearly the same as the aberrational correction (here is in radian and not in arcsecond). Diurnal aberration Diurnal aberration is caused by the velocity of the observer on the surface of the rotating Earth. It is therefore dependent not only on the time of the observation, but also the latitude and longitude of the observer. Its effect is much smaller than that of annual aberration, and is only 0.32 arcseconds in the case of an observer at the Equator, where the rotational velocity is greatest. Secular aberration The secular component of aberration, caused by the motion of the Solar System in space, has been further subdivided into several components: aberration resulting from the motion of the solar system barycenter around the center of our Galaxy, aberration resulting from the motion of the Galaxy relative to the Local Group, and aberration resulting from the motion of the Local Group relative to the cosmic microwave background. Secular aberration affects the apparent positions of stars and extragalactic objects. The large, constant part of secular aberration cannot be directly observed and "It has been standard practice to absorb this large, nearly constant effect into the reported" positions of stars. In about 200 million years, the Sun circles the galactic center, whose measured location is near right ascension (α = 266.4°) and declination (δ = −29.0°). The constant, unobservable, effect of the solar system's motion around the galactic center has been computed variously as 150 or 165 arcseconds. The other, observable, part is an acceleration toward the galactic center of approximately 2.5 × 10−10 m/s2, which yields a change of aberration of about 5 µas/yr. Highly precise measurements extending over several years can observe this change in secular aberration, often called the secular aberration drift or the acceleration of the Solar System, as a small apparent proper motion. Recently, highly precise astrometry of extragalactic objects using both Very Long Baseline Interferometry and the Gaia space observatory have successfully measured this small effect. The first VLBI measurement of the apparent motion, over a period of 20 years, of 555 extragalactic objects towards the center of our galaxy at equatorial coordinates of α = 263° and δ = −20° indicated a secular aberration drift 6.4 ±1.5 μas/yr. Later determinations using a series of VLBI measurements extending over almost 40 years determined the secular aberration drift to be 5.83 ± 0.23 μas/yr in the direction α = 270.2 ± 2.3° and δ = −20.2° ± 3.6°. Optical observations using only 33 months of Gaia satellite data of 1.6 million extragalactic sources indicated an acceleration of the solar system of 2.32 ± 0.16 × 10−10 m/s2 and a corresponding secular aberration drift of 5.05 ± 0.35 µas/yr in the direction of α = 269.1° ± 5.4°, δ = −31.6° ± 4.1°. It is expected that later Gaia data releases, incorporating about 66 and 120 months of data, will reduce the random errors of these results by factors of 0.35 and 0.15. The latest edition of the International Celestial Reference Frame (ICRF3) adopted a recommended galactocentric aberration constant of 5.8 µas/yr and recommended a correction for secular aberration to obtain the highest positional accuracy for times other than the reference epoch 2015.0. Planetary aberration Planetary aberration is the combination of the aberration of light (due to Earth's velocity) and light-time correction (due to the object's motion and distance), as calculated in the rest frame of the Solar System. Both are determined at the instant when the moving object's light reaches the moving observer on Earth. It is so called because it is usually applied to planets and other objects in the Solar System whose motion and distance are accurately known. Discovery and first observations The discovery of the aberration of light was totally unexpected, and it was only by considerable perseverance and perspicacity that Bradley was able to explain it in 1727. It originated from attempts to discover whether stars possessed appreciable parallaxes. Search for stellar parallax The Copernican heliocentric theory of the Solar System had received confirmation by the observations of Galileo and Tycho Brahe and the mathematical investigations of Kepler and Newton. As early as 1573, Thomas Digges had suggested that parallactic shifting of the stars should occur according to the heliocentric model, and consequently if stellar parallax could be observed it would help confirm this theory. Many observers claimed to have determined such parallaxes, but Tycho Brahe and Giovanni Battista Riccioli concluded that they existed only in the minds of the observers, and were due to instrumental and personal errors. However, in 1680 Jean Picard, in his Voyage d’Uranibourg, stated, as a result of ten years' observations, that Polaris, the Pole Star, exhibited variations in its position amounting to 40″ annually. Some astronomers endeavoured to explain this by parallax, but these attempts failed because the motion differed from that which parallax would produce. John Flamsteed, from measurements made in 1689 and succeeding years with his mural quadrant, similarly concluded that the declination of Polaris was 40″ less in July than in September. Robert Hooke, in 1674, published his observations of γ Draconis, a star of magnitude 2m which passes practically overhead at the latitude of London (hence its observations are largely free from the complex corrections due to atmospheric refraction), and concluded that this star was 23″ more northerly in July than in October. James Bradley's observations Consequently, when Bradley and Samuel Molyneux entered this sphere of research in 1725, there was still considerable uncertainty as to whether stellar parallaxes had been observed or not, and it was with the intention of definitely answering this question that they erected a large telescope at Molyneux's house at Kew. They decided to reinvestigate the motion of γ Draconis with a telescope constructed by George Graham (1675–1751), a celebrated instrument-maker. This was fixed to a vertical chimney stack in such manner as to permit a small oscillation of the eyepiece, the amount of which (i.e. the deviation from the vertical) was regulated and measured by the introduction of a screw and a plumb line. The instrument was set up in November 1725, and observations on γ Draconis were made starting in December. The star was observed to move 40″ southwards between September and March, and then reversed its course from March to September. At the same time, 35 Camelopardalis, a star with a right ascension nearly exactly opposite to that of γ Draconis, was 19" more northerly at the beginning of March than in September. These results were completely unexpected and inexplicable by existing theories. Early hypotheses Bradley and Molyneux discussed several hypotheses in the hope of finding the solution. Since the apparent motion was evidently caused neither by parallax nor observational errors, Bradley first hypothesized that it could be due to oscillations in the orientation of the Earth's axis relative to the celestial sphere – a phenomenon known as nutation. 35 Camelopardalis was seen to possess an apparent motion which could be consistent with nutation, but since its declination varied only one half as much as that of γ Draconis, it was obvious that nutation did not supply the answer (however, Bradley later went on to discover that the Earth does indeed nutate). He also investigated the possibility that the motion was due to an irregular distribution of the Earth's atmosphere, thus involving abnormal variations in the refractive index, but again obtained negative results. On August 19, 1727, Bradley embarked upon a further series of observations using a telescope of his own erected at the Rectory, Wanstead. This instrument had the advantage of a larger field of view and he was able to obtain precise positions of a large number of stars over the course of about twenty years. During his first two years at Wanstead, he established the existence of the phenomenon of aberration beyond all doubt, and this also enabled him to formulate a set of rules that would allow the calculation of the effect on any given star at a specified date. Development of the theory of aberration Bradley eventually developed his explanation of aberration in about September 1728 and this theory was presented to the Royal Society in mid January the following year. One well-known story was that he saw the change of direction of a wind vane on a boat on the Thames, caused not by an alteration of the wind itself, but by a change of course of the boat relative to the wind direction. However, there is no record of this incident in Bradley's own account of the discovery, and it may therefore be apocryphal. The following table shows the magnitude of deviation from true declination for γ Draconis and the direction, on the planes of the solstitial colure and ecliptic prime meridian, of the tangent of the velocity of the Earth in its orbit for each of the four months where the extremes are found, as well as expected deviation from true ecliptic longitude if Bradley had measured its deviation from right ascension: Bradley proposed that the aberration of light not only affected declination, but right ascension as well, so that a star in the pole of the ecliptic would describe a little ellipse with a diameter of about 40", but for simplicity, he assumed it to be a circle. Since he only observed the deviation in declination, and not in right ascension, his calculations for the maximum deviation of a star in the pole of the ecliptic are for its declination only, which will coincide with the diameter of the little circle described by such star. For eight different stars, his calculations are as follows: Based on these calculations, Bradley was able to estimate the constant of aberration at 20.2", which is equal to 0.00009793 radians, and with this was able to estimate the speed of light at per second. By projecting the little circle for a star in the pole of the ecliptic, he could simplify the calculation of the relationship between the speed of light and the speed of the Earth's annual motion in its orbit as follows: Thus, the speed of light to the speed of the Earth's annual motion in its orbit is 10,210 to one, from whence it would follow, that light moves, or is propagated as far as from the Sun to the Earth in 8 minutes 12 seconds. The original motivation of the search for stellar parallax was to test the Copernican theory that the Earth revolves around the Sun. The change of aberration in the course of the year demonstrates the relative motion of the Earth and the stars. Retrodiction on Descartes' lightspeed argument In the prior century, René Descartes argued that if light were not instantaneous, then shadows of moving objects would lag; and if propagation times over terrestrial distances were appreciable, then during a lunar eclipse the Sun, Earth, and Moon would be out of alignment by hours' motion, contrary to observation. Huygens commented that, on Rømer's lightspeed data (yielding an earth-moon round-trip time of only seconds), the lag angle would be imperceptible. What they both overlooked is that aberration (as understood only later) would exactly counteract the lag even if large, leaving this eclipse method completely insensitive to light speed. (Otherwise, shadow-lag methods could be made to sense absolute translational motion, contrary to a basic principle of relativity.) Historical theories of aberration The phenomenon of aberration became a driving force for many physical theories during the 200 years between its observation and the explanation by Albert Einstein. The first classical explanation was provided in 1729, by James Bradley as described above, who attributed it to the finite speed of light and the motion of Earth in its orbit around the Sun. However, this explanation proved inaccurate once the wave nature of light was better understood, and correcting it became a major goal of the 19th century theories of luminiferous aether. Augustin-Jean Fresnel proposed a correction due to the motion of a medium (the aether) through which light propagated, known as "partial aether drag". He proposed that objects partially drag the aether along with them as they move, and this became the accepted explanation for aberration for some time. George Stokes proposed a similar theory, explaining that aberration occurs due to the flow of aether induced by the motion of the Earth. Accumulated evidence against these explanations, combined with new understanding of the electromagnetic nature of light, led Hendrik Lorentz to develop an electron theory which featured an immobile aether, and he explained that objects contract in length as they move through the aether. Motivated by these previous theories, Albert Einstein then developed the theory of special relativity in 1905, which provides the modern account of aberration. Bradley's classical explanation Bradley conceived of an explanation in terms of a corpuscular theory of light in which light is made of particles. His classical explanation appeals to the motion of the earth relative to a beam of light-particles moving at a finite velocity, and is developed in the Sun's frame of reference, unlike the classical derivation given above. Consider the case where a distant star is motionless relative to the Sun, and the star is extremely far away, so that parallax may be ignored. In the rest frame of the Sun, this means light from the star travels in parallel paths to the Earth observer, and arrives at the same angle regardless of where the Earth is in its orbit. Suppose the star is observed on Earth with a telescope, idealized as a narrow tube. The light enters the tube from the star at angle and travels at speed taking a time to reach the bottom of the tube, where it is detected. Suppose observations are made from Earth, which is moving with a speed . During the transit of the light, the tube moves a distance . Consequently, for the particles of light to reach the bottom of the tube, the tube must be inclined at an angle different from , resulting in an apparent position of the star at angle . As the Earth proceeds in its orbit it changes direction, so changes with the time of year the observation is made. The apparent angle and true angle are related using trigonometry as: . In the case of , this gives . While this is different from the more accurate relativistic result described above, in the limit of small angle and low velocity they are approximately the same, within the error of the measurements of Bradley's day. These results allowed Bradley to make one of the earliest measurements of the speed of light. Luminiferous aether In the early nineteenth century the wave theory of light was being rediscovered, and in 1804 Thomas Young adapted Bradley's explanation for corpuscular light to wavelike light traveling through a medium known as the luminiferous aether. His reasoning was the same as Bradley's, but it required that this medium be immobile in the Sun's reference frame and must pass through the earth unaffected, otherwise the medium (and therefore the light) would move along with the earth and no aberration would be observed. He wrote: However, it soon became clear Young's theory could not account for aberration when materials with a non-vacuum index of refraction were present. An important example is of a telescope filled with water. The velocity of the light in such a telescope will be slower than in vacuum, and is given by rather than where is the index of refraction of the water. Thus, by Bradley and Young's reasoning the aberration angle is given by . which predicts a medium-dependent angle of aberration. When refraction at the telescope's objective is taken into account this result deviates even more from the vacuum result. In 1810 François Arago performed a similar experiment and found that the aberration was unaffected by the medium in the telescope, providing solid evidence against Young's theory. This experiment was subsequently verified by many others in the following decades, most accurately by Airy in 1871, with the same result. Aether drag models Fresnel's aether drag In 1818, Augustin Fresnel developed a modified explanation to account for the water telescope and for other aberration phenomena. He explained that the aether is generally at rest in the Sun's frame of reference, but objects partially drag the aether along with them as they move. That is, the aether in an object of index of refraction moving at velocity is partially dragged with a velocity bringing the light along with it. This factor is known as "Fresnel's dragging coefficient". This dragging effect, along with refraction at the telescope's objective, compensates for the slower speed of light in the water telescope in Bradley's explanation. With this modification Fresnel obtained Bradley's vacuum result even for non-vacuum telescopes, and was also able to predict many other phenomena related to the propagation of light in moving bodies. Fresnel's dragging coefficient became the dominant explanation of aberration for the next decades. Stokes' aether drag However, the fact that light is polarized (discovered by Fresnel himself) led scientists such as Cauchy and Green to believe that the aether was a totally immobile elastic solid as opposed to Fresnel's fluid aether. There was thus renewed need for an explanation of aberration consistent both with Fresnel's predictions (and Arago's observations) as well as polarization. In 1845, Stokes proposed a 'putty-like' aether which acts as a liquid on large scales but as a solid on small scales, thus supporting both the transverse vibrations required for polarized light and the aether flow required to explain aberration. Making only the assumptions that the fluid is irrotational and that the boundary conditions of the flow are such that the aether has zero velocity far from the Earth, but moves at the Earth's velocity at its surface and within it, he was able to completely account for aberration. The velocity of the aether outside of the Earth would decrease as a function of distance from the Earth so light rays from stars would be progressively dragged as they approached the surface of the Earth. The Earth's motion would be unaffected by the aether due to D'Alembert's paradox. Both Fresnel and Stokes' theories were popular. However, the question of aberration was put aside during much of the second half of the 19th century as focus of inquiry turned to the electromagnetic properties of aether. Lorentz' length contraction In the 1880s once electromagnetism was better understood, interest turned again to the problem of aberration. By this time flaws were known to both Fresnel's and Stokes' theories. Fresnel's theory required that the relative velocity of aether and matter to be different for light of different colors, and it was shown that the boundary conditions Stokes had assumed in his theory were inconsistent with his assumption of irrotational flow. At the same time, the modern theories of electromagnetic aether could not account for aberration at all. Many scientists such as Maxwell, Heaviside and Hertz unsuccessfully attempted to solve these problems by incorporating either Fresnel or Stokes' theories into Maxwell's new electromagnetic laws. Hendrik Lorentz spent considerable effort along these lines. After working on this problem for a decade, the issues with Stokes' theory caused him to abandon it and to follow Fresnel's suggestion of a (mostly) stationary aether (1892, 1895). However, in Lorentz's model the aether was completely immobile, like the electromagnetic aethers of Cauchy, Green and Maxwell and unlike Fresnel's aether. He obtained Fresnel's dragging coefficient from modifications of Maxwell's electromagnetic theory, including a modification of the time coordinates in moving frames ("local time"). In order to explain the Michelson–Morley experiment (1887), which apparently contradicted both Fresnel's and Lorentz's immobile aether theories, and apparently confirmed Stokes' complete aether drag, Lorentz theorized (1892) that objects undergo "length contraction" by a factor of in the direction of their motion through the aether. In this way, aberration (and all related optical phenomena) can be accounted for in the context of an immobile aether. Lorentz' theory became the basis for much research in the next decade, and beyond. Its predictions for aberration are identical to those of the relativistic theory. Special relativity Lorentz' theory matched experiment well, but it was complicated and made many unsubstantiated physical assumptions about the microscopic nature of electromagnetic media. In his 1905 theory of special relativity, Albert Einstein reinterpreted the results of Lorentz' theory in a much simpler and more natural conceptual framework which disposed of the idea of an aether. His derivation is given above, and is now the accepted explanation. Robert S. Shankland reported some conversations with Einstein, in which Einstein emphasized the importance of aberration: Other important motivations for Einstein's development of relativity were the moving magnet and conductor problem and (indirectly) the negative aether drift experiments, already mentioned by him in the introduction of his first relativity paper. Einstein wrote in a note in 1952: While Einstein's result is the same as Bradley's original equation except for an extra factor of , Bradley's result does not merely give the classical limit of the relativistic case, in the sense that it gives incorrect predictions even at low relative velocities. Bradley's explanation cannot account for situations such as the water telescope, nor for many other optical effects (such as interference) that might occur within the telescope. This is because in the Earth's frame it predicts that the direction of propagation of the light beam in the telescope is not normal to the wavefronts of the beam, in contradiction with Maxwell's theory of electromagnetism. It also does not preserve the speed of light c between frames. However, Bradley did correctly infer that the effect was due to relative velocities. See also Apparent place Stellar parallax Astronomical nutation Proper motion Timeline of electromagnetism and classical optics Relativistic aberration Notes References Further reading P. Kenneth Seidelmann (Ed.), Explanatory Supplement to the Astronomical Almanac (University Science Books, 1992), 127–135, 700. Stephen Peter Rigaud, Miscellaneous Works and Correspondence of the Rev. James Bradley, D.D. F.R.S. (1832). Charles Hutton, Mathematical and Philosophical Dictionary (1795). H. H. Turner, Astronomical Discovery (1904). Thomas Simpson, Essays on Several Curious and Useful Subjects in Speculative and Mix'd Mathematicks (1740). :de:August Ludwig Busch, Reduction of the Observations Made by Bradley at Kew and Wansted to Determine the Quantities of Aberration and Nutation (1838). External links Courtney Seligman on Bradley's observations Electromagnetic radiation Astrometry Radiation
2726
https://en.wikipedia.org/wiki/Atlas%20Autocode
Atlas Autocode
Atlas Autocode (AA) is a programming language developed around 1963 at the University of Manchester. A variant of the language ALGOL, it was developed by Tony Brooker and Derrick Morris for the Atlas computer. The initial AA and AB compilers were written by Jeff Rohl and Tony Brooker using the Brooker-Morris Compiler-compiler, with a later hand-coded non-CC implementation (ABC) by Jeff Rohl. The word Autocode was basically an early term for programming language. Different autocodes could vary greatly. Features AA was a block structured language that featured explicitly typed variables, subroutines, and functions. It omitted some ALGOL features such as passing parameters by name, which in ALGOL 60 means passing the memory address of a short subroutine (a thunk) to recalculate a parameter each time it is mentioned. The AA compiler could generate range-checking for array accesses, and allowed an array to have dimensions that were determined at runtime, i.e., an array could be declared as integer array Thing (i:j), where i and j were calculated values. AA high-level routines could include machine code, either to make an inner loop more efficient or to effect some operation which otherwise cannot be done easily. AA included a complex data type to represent complex numbers, partly because of pressure from the electrical engineering department, as complex numbers are used to represent the behavior of alternating current. The imaginary unit square root of -1 was represented by i, which was treated as a fixed complex constant = i. The complex data type was dropped when Atlas Autocode later evolved into the language Edinburgh IMP. IMP was an extension of AA and was used to write the Edinburgh Multiple Access System (EMAS) operating system. AA's second-greatest claim to fame (after being the progenitor of IMP and EMAS) was that it had many of the features of the original Compiler Compiler. A variant of the AA compiler included run-time support for a top-down recursive descent parser. The style of parser used in the Compiler Compiler was in use continuously at Edinburgh from the 60's until almost the year 2000. Other Autocodes were developed for the Titan computer, a prototype Atlas 2 at Cambridge, and the Ferranti Mercury. Syntax Atlas Autocode's syntax was largely similar to ALGOL, though it was influenced by the output device which the author had available, a Friden Flexowriter. Thus, it allowed symbols like ½ for .5 and the superscript 2 for to the power of 2. The Flexowriter supported overstriking and thus, AA did also: up to three characters could be overstruck as a single symbol. For example, the character set had no ↑ symbol, so exponentiation was an overstrike of | and *. The aforementioned underlining of reserved words (keywords) could also be done using overstriking. The language is described in detail in the Atlas Autocode Reference Manual. Other Flexowriter characters that were found a use in AA were: α in floating-point numbers, e.g., 3.56α-7 for modern 3.56e-7 ; β to mean the second half of a 48-bit Atlas memory word; π for the mathematical constant pi. When AA was ported to the English Electric KDF9 computer, the character set was changed to International Organization for Standardization (ISO). That compiler has been recovered from an old paper tape by the Edinburgh Computer History Project and is available online, as is a high-quality scan of the original Edinburgh version of the Atlas Autocode manual. Keywords in AA were distinguishable from other text by being underlined, which was implemented via overstrike in the Flexowriter (compare to bold in ALGOL). There were also two stropping regimes. First, there was an "uppercasedelimiters" mode where all uppercase letters (outside strings) were treated as underlined lowercase. Second, in some versions (but not in the original Atlas version), it was possible to strop keywords by placing a "%" sign in front of them, for example the keyword endofprogramme could be typed as %end %of %programme or %endofprogramme. This significantly reduced typing, due to only needing one character, rather than overstriking the whole keyword. As in ALGOL, there were no reserved words in the language as keywords were identified by underlining (or stropping), not by recognising reserved character sequences. In the statement if token=if then result = token, there is both a keyword if and a variable named if. As in ALGOL, AA allowed spaces in variable names, such as integer previous value. Spaces were not significant and were removed before parsing in a trivial pre-lexing stage called "line reconstruction". What the compiler would see in the above example would be "iftoken=ifthenresult=token". Spaces were possible due partly to keywords being distinguished in other ways, and partly because the source was processed by scannerless parsing, without a separate lexing phase, which allowed the lexical syntax to be context-sensitive. The syntax for expressions let the multiplication operator be omitted, e.g., 3a was treated as 3*a, and a(i+j) was treated as a*(i+j) if a was not an array. In ambiguous uses, the longest possible name was taken (maximal munch), for example ab was not treated as a*b, whether or not a and b had been declared. References External links The main features of Atlas Autocode, By R. A. Brooker, J. S. Rohl, and S. R. Clark The Atlas Autocode Mini-Manual by W. F. Lunnon, G. Riding (July 1965) Atlas Autocode Reference Manual by R.A. Brooker, J.S.Rohl (March 1965) Mercury Autocode, Atlas Autocode and some Associated Matters. by Vic Forrington (Jan 2014) Flowcharts for Atlas Autocode compiler on KDF9. Ferranti History of computing in the United Kingdom Structured programming languages
2732
https://en.wikipedia.org/wiki/Au%20file%20format
Au file format
The Au file format is a simple audio file format introduced by Sun Microsystems. The format was common on NeXT systems and on early Web pages. Originally it was headerless, being simply 8-bit μ-law-encoded data at an 8000 Hz sample rate. Hardware from other vendors often used sample rates as high as 8192 Hz, often integer multiples of video clock signal frequencies. Newer files have a header that consists of six unsigned 32-bit words, an optional information chunk which is always of non-zero size, and then the data (in big-endian format). Although the format now supports many audio encoding formats, it remains associated with the μ-law logarithmic encoding. This encoding was native to the SPARCstation 1 hardware, where SunOS exposed the encoding to application programs through the /dev/audio device file interface. This encoding and interface became a de facto standard for Unix sound. New format All fields are stored in big-endian format, including the sample data. The type of encoding depends on the value of the "encoding" field (word 3 of the header). Formats 2 through 7 are uncompressed linear PCM, therefore technically lossless (although not necessarily free of quantization error, especially in 8-bit form). Formats 1 and 27 are μ-law and A-law, respectively, both companding logarithmic representations of PCM, and arguably lossy as they pack what would otherwise be almost 16 bits of dynamic range into 8 bits of encoded data, even though this is achieved by an altered dynamic response and no data is actually "thrown away". Formats 23 through 26 are ADPCM, which is an early form of lossy compression, usually but not always with 4 bits of encoded data per audio sample (for 4:1 efficiency with 16-bit input, or 2:1 with 8-bit; equivalent to e.g. encoding CD quality MP3 at a 352kbit rate using a low quality encoder). Several of the others (number 8 through 22) are DSP commands or data, designed to be processed by the NeXT Music Kit software. Note: PCM formats are encoded as signed data (as opposed to unsigned). The current format supports only a single audio data segment per file. The variable-length annotation field is currently ignored by most audio applications. References External links Oracle man pages: audio(7i) - generic audio device interface (for information on the /dev/audio interface) Computer file formats Digital container formats Audio codecs
2733
https://en.wikipedia.org/wiki/April%2025
April 25
Events Pre-1600 404 BC – Admiral Lysander and King Pausanias of Sparta blockade Athens and bring the Peloponnesian War to a successful conclusion. 775 – The Battle of Bagrevand puts an end to an Armenian rebellion against the Abbasid Caliphate. Muslim control over the South Caucasus is solidified and its Islamization begins, while several major Armenian nakharar families lose power and their remnants flee to the Byzantine Empire. 799 – After mistreatment and disfigurement by the citizens of Rome, Pope Leo III flees to the Frankish court of king Charlemagne at Paderborn for protection. 1134 – The name Zagreb was mentioned for the first time in the Felician Charter relating to the establishment of the Zagreb Bishopric around 1094. 1601–1900 1607 – Eighty Years' War: The Dutch fleet destroys the anchored Spanish fleet at Gibraltar. 1644 – Transition from Ming to Qing: The Chongzhen Emperor, the last Emperor of Ming China, commits suicide during a peasant rebellion led by Li Zicheng. 1707 – A coalition of Britain, the Netherlands and Portugal is defeated by a Franco-Spanish army at Almansa (Spain) in the War of the Spanish Succession. 1792 – Highwayman Nicolas J. Pelletier becomes the first person executed by guillotine. 1792 – "La Marseillaise" (the French national anthem) is composed by Claude Joseph Rouget de Lisle. 1829 – Charles Fremantle arrives in HMS Challenger off the coast of modern-day Western Australia prior to declaring the Swan River Colony for the British Empire. 1846 – Thornton Affair: Open conflict begins over the disputed border of Texas, triggering the Mexican–American War. 1849 – The Governor General of Canada, Lord Elgin, signs the Rebellion Losses Bill, outraging Montreal's English population and triggering the Montreal Riots. 1859 – British and French engineers break ground for the Suez Canal. 1862 – American Civil War: Forces under U.S. Admiral David Farragut demand the surrender of the Confederate city of New Orleans, Louisiana. 1864 – American Civil War: In the Battle of Marks' Mills, a force of 8,000 Confederate soldiers attacks 1,800 Union soldiers and a large number of wagon teamsters, killing or wounding 1,500 Union combatants. 1882 – French and Vietnamese troops clashed in Tonkin, when Commandant Henri Rivière seized the citadel of Hanoi with a small force of marine infantry. 1898 – Spanish–American War: The United States Congress declares that a state of war between the U.S. and Spain has existed since April 21, when an American naval blockade of the Spanish colony of Cuba began. 1901–present 1901 – New York becomes the first U.S. state to require automobile license plates. 1915 – World War I: The Battle of Gallipoli begins: The invasion of the Turkish Gallipoli Peninsula by British, French, Indian, Newfoundland, Australian and New Zealand troops, begins with landings at Anzac Cove and Cape Helles. 1916 – Anzac Day is commemorated for the first time on the first anniversary of the landing at ANZAC Cove. 1920 – At the San Remo conference, the principal Allied Powers of World War I adopt a resolution to determine the allocation of Class "A" League of Nations mandates for administration of the former Ottoman-ruled lands of the Middle East. 1933 – Nazi Germany issues the Law Against Overcrowding in Schools and Universities limiting the number of Jewish students able to attend public schools and universities. 1938 – U.S. Supreme Court delivers its opinion in Erie Railroad Co. v. Tompkins and overturns a century of federal common law. 1944 – The United Negro College Fund is incorporated. 1945 – World War II: United States and Soviet reconnaissance troops meet in Torgau and Strehla along the River Elbe, cutting the Wehrmacht of Nazi Germany in two. This would be later known as Elbe Day. 1945 – World War II: Liberation Day (Italy): The National Liberation Committee for Northern Italy calls for a general uprising against the German occupation and the Italian Social Republic. 1945 – United Nations Conference on International Organization: Founding negotiations for the United Nations begin in San Francisco. 1945 – World War II: The last German troops retreat from Finnish soil in Lapland, ending the Lapland War. Military actions of the Second World War end in Finland. 1951 – Korean War: Assaulting Chinese forces are forced to withdraw after heavy fighting with UN forces, primarily made up of Australian and Canadian troops, at the Battle of Kapyong. 1953 – Francis Crick and James Watson publish "Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid" describing the double helix structure of DNA. 1954 – The first practical solar cell is publicly demonstrated by Bell Telephone Laboratories. 1959 – The Saint Lawrence Seaway, linking the North American Great Lakes and the Atlantic Ocean, officially opens to shipping. 1960 – The United States Navy submarine completes the first submerged circumnavigation of the globe. 1961 – Robert Noyce is granted a patent for an integrated circuit. 1972 – Vietnam War: Nguyen Hue Offensive: The North Vietnamese 320th Division forces 5,000 South Vietnamese troops to retreat and traps about 2,500 others northwest of Kontum. 1974 – Carnation Revolution: A leftist military coup in Portugal overthrows the authoritarian-conservative Estado Novo regime and establishes a democratic government. 1980 – One hundred forty-six people are killed when Dan-Air Flight 1008 crashes near Los Rodeos Airport in Tenerife, Canary Islands. 1981 – More than 100 workers are exposed to radiation during repairs of at the Tsuruga Nuclear Power Plant in Japan. 1982 – Israel completes its withdrawal from the Sinai Peninsula per the Camp David Accords. 1983 – Cold War: American schoolgirl Samantha Smith is invited to visit the Soviet Union by its leader Yuri Andropov after he read her letter in which she expressed fears about nuclear war. 1983 – Pioneer 10 travels beyond Pluto's orbit. 1990 – Violeta Chamorro takes office as the President of Nicaragua, the first woman to hold the position. 2001 – President George W. Bush pledges U.S. military support in the event of a Chinese attack on Taiwan. 2004 – The March for Women's Lives brings between 500,000 and 800,000 protesters, mostly pro-choice, to Washington D.C. to protest the Partial-Birth Abortion Ban Act of 2003, and other restrictions on abortion. 2005 – The final piece of the Obelisk of Axum is returned to Ethiopia after being stolen by the invading Italian army in 1937. 2005 – A seven-car commuter train derails and crashes into an apartment building near Amagasaki Station in Japan, killing 107, including the driver. 2005 – Bulgaria and Romania sign the Treaty of Accession 2005 to join the European Union. 2007 – Boris Yeltsin's funeral: The first to be sanctioned by the Russian Orthodox Church for a head of state since the funeral of Emperor Alexander III in 1894. 2014 – The Flint water crisis begins when officials at Flint, Michigan switch the city's water supply to the Flint River, leading to lead and bacteria contamination. 2015 – Nearly 9,100 are killed after a massive 7.8 magnitude earthquake strikes Nepal. Births Pre-1600 1214 – Louis IX of France (d. 1270) 1228 – Conrad IV of Germany (d. 1254) 1284 – Edward II of England (d. 1327) 1287 – Roger Mortimer, 1st Earl of March, English politician, Lord Lieutenant of Ireland (d. 1330) 1502 – Georg Major, German theologian and academic (d. 1574) 1529 – Francesco Patrizi, Italian philosopher and scientist (d. 1597) 1599 – Oliver Cromwell, English general and politician, Lord Protector of Great Britain (d. 1658) 1601–1900 1621 – Roger Boyle, 1st Earl of Orrery, English soldier and politician (d. 1679) 1666 – Johann Heinrich Buttstett, German organist and composer (d. 1727) 1694 – Richard Boyle, 3rd Earl of Burlington, English architect and politician, Lord High Treasurer of Ireland (d. 1753) 1710 – James Ferguson, Scottish astronomer and author (d. 1776) 1723 – Giovanni Marco Rutini, Italian composer (d. 1797) 1725 – Augustus Keppel, 1st Viscount Keppel, English admiral and politician (d. 1786) 1767 – Nicolas Oudinot, French general (d. 1847) 1770 – Georg Sverdrup, Norwegian philologist and academic (d. 1850) 1776 – Princess Mary, Duchess of Gloucester and Edinburgh (d. 1857) 1843 – Princess Alice of the United Kingdom (d. 1878) 1849 – Felix Klein, German mathematician and academic (d. 1925) 1850 – Luise Adolpha Le Beau, German composer and educator (d. 1927) 1851 – Leopoldo Alas, Spanish author, critic, and academic (d. 1901) 1854 – Charles Sumner Tainter, American engineer and inventor (d. 1940) 1862 – Edward Grey, 1st Viscount Grey of Fallodon, English ornithologist and politician, Secretary of State for Foreign and Commonwealth Affairs (d. 1933) 1868 – John Moisant, American pilot and engineer (d. 1910) 1871 – Lorne Currie, French-English sailor (d. 1926) 1872 – C. B. Fry, English cricketer, footballer, educator, and politician (d. 1956) 1873 – Walter de la Mare, English poet, short story writer, and novelist (d. 1956) 1873 – Howard Garis, American author, creator of the Uncle Wiggily series of children's stories (d. 1962) 1874 – Guglielmo Marconi, Italian businessman and inventor, developed Marconi's law, Nobel Prize laureate (d. 1937) 1874 – Ernest Webb, English-Canadian race walker (d. 1937) 1876 – Jacob Nicol, Canadian publisher, lawyer, and politician (d. 1958) 1878 – William Merz, American gymnast and triathlete (d. 1946) 1882 – Fred McLeod, Scottish golfer (d. 1976) 1887 – Kojo Tovalou Houénou, Beninese lawyer and critic (d. 1936) 1892 – Maud Hart Lovelace, American author (d. 1980) 1896 – Fred Haney, American baseball player, coach, and manager (d. 1977) 1897 – Mary, Princess Royal and Countess of Harewood (d. 1965) 1900 – Gladwyn Jebb, English politician and diplomat, Secretary-General of the United Nations (d. 1996) 1900 – Wolfgang Pauli, Austrian-Swiss-American physicist and academic, Nobel Prize laureate (d. 1958) 1901–present 1902 – Werner Heyde, German psychiatrist and academic (d. 1964) 1902 – Mary Miles Minter, American actress (d. 1984) 1903 – Andrey Kolmogorov, Russian mathematician and academic (d. 1987) 1905 – George Nēpia, New Zealand rugby player and referee (d. 1986) 1906 – Joel Brand, member of the Budapest Aid and Rescue Committee (d. 1964) 1906 – William J. Brennan Jr., American colonel and Associate Justice of the United States Supreme Court (d. 1997) 1908 – Edward R. Murrow, American journalist (d. 1965) 1909 – William Pereira, American architect, designed the Transamerica Pyramid (d. 1985) 1910 – Arapeta Awatere, New Zealand interpreter, military leader, politician, and murderer (d. 1976) 1911 – Connie Marrero, Cuban baseball player and coach (d. 2014) 1912 – Earl Bostic, African-American saxophonist (d. 1965) 1913 – Nikolaos Roussen, Greek captain (d. 1944) 1914 – Ross Lockridge Jr., American author and academic (d. 1948) 1915 – Mort Weisinger, American journalist and author (d. 1978) 1916 – Jerry Barber, American golfer (d. 1994) 1917 – Ella Fitzgerald, American singer (d. 1996) 1917 – Jean Lucas, French racing driver (d. 2003) 1918 – Graham Payn, South African-born English actor and singer (d. 2005) 1918 – Gérard de Vaucouleurs, French-American astronomer and academic (d. 1995) 1918 – Astrid Varnay, Swedish-American soprano and actress (d. 2006) 1919 – Finn Helgesen, Norwegian speed skater (d. 2011) 1921 – Karel Appel, Dutch painter and sculptor (d. 2006) 1923 – Francis Graham-Smith, English astronomer and academic 1923 – Melissa Hayden, Canadian ballerina (d. 2006) 1923 – Albert King, African-American singer-songwriter, guitarist, and producer (d. 1992) 1924 – Ingemar Johansson, Swedish race walker (d. 2009) 1924 – Franco Mannino, Italian pianist, composer, director, and playwright (d. 2005) 1924 – Paulo Vanzolini, Brazilian singer-songwriter and zoologist (d. 2013) 1925 – Tony Christopher, Baron Christopher, English trade union leader and businessman 1925 – Sammy Drechsel, German comedian and journalist (d. 1986) 1925 – Louis O'Neil, Canadian academic and politician (d. 2018) 1926 – Johnny Craig, American author and illustrator (d. 2001) 1926 – Gertrude Fröhlich-Sandner, Austrian politician (d. 2008) 1926 – Patricia Castell, Argentine actress (d. 2013) 1927 – Corín Tellado, Spanish author (d. 2009) 1927 – Albert Uderzo, French author and illustrator (d. 2020) 1928 – Cy Twombly, American-Italian painter and sculptor (d. 2011) 1929 – Yvette Williams, New Zealand long jumper, shot putter, and discus thrower (d. 2019) 1930 – Paul Mazursky, American actor, director, and screenwriter (d. 2014) 1930 – Godfrey Milton-Thompson, English admiral and surgeon (d. 2012) 1930 – Peter Schulz, German lawyer and politician, Mayor of Hamburg (d. 2013) 1931 – Felix Berezin, Russian mathematician and physicist (d. 1980) 1931 – David Shepherd, English painter and author (d. 2017) 1932 – Nikolai Kardashev, Russian astrophysicist (d. 2019) 1932 – Meadowlark Lemon, African-American basketball player and minister (d. 2015) 1932 – Lia Manoliu, Romanian discus thrower and politician (d. 1998) 1933 – Jerry Leiber, American songwriter and producer (d. 2011) 1933 – Joyce Ricketts, American baseball player (d. 1992) 1934 – Peter McParland, Northern Irish footballer and manager 1935 – Bob Gutowski, American pole vaulter (d. 1960) 1935 – Reinier Kreijermaat, Dutch footballer (d. 2018) 1936 – Henck Arron, Surinamese banker and politician, 1st Prime Minister of the Republic of Suriname (d. 2000) 1938 – Roger Boisjoly, American aerodynamicist and engineer (d. 2012) 1938 – Ton Schulten, Dutch painter and graphic designer 1939 – Tarcisio Burgnich, Italian footballer and manager (d. 2021) 1939 – Michael Llewellyn-Smith, English academic and diplomat 1939 – Robert Skidelsky, Baron Skidelsky, English historian and academic 1939 – Veronica Sutherland, English academic and British diplomat 1940 – Al Pacino, American actor and director 1941 – Bertrand Tavernier, French actor, director, producer, and screenwriter (d. 2021) 1942 – Jon Kyl, American lawyer and politician 1943 – Tony Christie, English singer-songwriter and actor 1944 – Len Goodman, English dancer (d. 2023) 1944 – Mike Kogel, German singer-songwriter 1944 – Stephen Nickell, English economist and academic 1944 – Bruce Ponder, English geneticist and cancer researcher 1945 – Stu Cook, American bass player Creedence Clearwater Revival, songwriter, and producer 1945 – Richard C. Hoagland, American theorist and author 1945 – Björn Ulvaeus, Swedish singer-songwriter and producer 1946 – Talia Shire, American actress 1946 – Peter Sutherland, Irish lawyer and politician, Attorney General of Ireland (d. 2018) 1946 – Vladimir Zhirinovsky, Russian colonel, lawyer, and politician (d. 2022) 1947 – Johan Cruyff, Dutch footballer and manager (d. 2016) 1947 – Jeffrey DeMunn, American actor 1947 – Cathy Smith, Canadian singer and drug dealer (d. 2020) 1948 – Mike Selvey, English cricketer and sportscaster 1948 – Yu Shyi-kun, Taiwanese politician, 39th Premier of the Republic of China 1949 – Vicente Pernía, Argentinian footballer and race car driver 1949 – Dominique Strauss-Kahn, French economist, lawyer, and politician, French Minister of Finance 1949 – James Fenton, English poet, journalist and literary critic 1950 – Donnell Deeny, Northern Irish lawyer and judge 1950 – Steve Ferrone, English drummer 1950 – Peter Hintze, German politician (d. 2016) 1950 – Valentyna Kozyr, Ukrainian high jumper 1951 – Ian McCartney, Scottish politician, Minister of State for Trade 1952 – Ketil Bjørnstad, Norwegian pianist and composer 1952 – Vladislav Tretiak, Russian ice hockey player and coach 1952 – Jacques Santini, French footballer and coach 1953 – Ron Clements, American animator, producer, and screenwriter 1953 – Gary Cosier, Australian cricketer 1953 – Anthony Venables, English economist, author, and academic 1954 – Melvin Burgess, English author 1954 – Randy Cross, American football player and sportscaster 1954 – Róisín Shortall, Irish educator and politician 1955 – Américo Gallego, Argentinian footballer and coach 1955 – Parviz Parastui, Iranian actor and singer 1955 – Zev Siegl, American businessman, co-founded Starbucks 1956 – Dominique Blanc, French actress, director, and screenwriter 1956 – Abdalla Uba Adamu, Nigerian professor, media scholar 1957 – Theo de Rooij, Dutch cyclist and manager 1958 – Fish, Scottish singer-songwriter 1958 – Misha Glenny, British journalist 1959 – Paul Madden, English diplomat, British High Commissioner to Australia 1959 – Daniel Kash, Canadian actor and director 1959 – Tony Phillips, American baseball player (d. 2016) 1960 – Paul Baloff, American singer (d. 2002) 1960 – Robert Peston, English journalist 1961 – Dinesh D'Souza, Indian-American journalist and author 1961 – Miran Tepeš, Slovenian ski jumper 1962 – Foeke Booy, Dutch footballer and manager 1963 – Joy Covey, American businesswoman (d. 2013) 1963 – Dave Martin, English footballer 1963 – David Moyes, Scottish footballer and manager 1963 – Bernd Müller, German footballer and manager 1963 – Paul Wassif, English singer-songwriter and guitarist 1964 – Hank Azaria, American actor, voice artist, comedian and producer 1964 – Andy Bell, English singer-songwriter 1965 – Eric Avery, American bass player and songwriter 1965 – Mark Bryant, American basketball player and coach 1965 – John Henson, American puppeteer and voice actor (d. 2014) 1966 – Diego Domínguez, Argentinian-Italian rugby player 1966 – Femke Halsema, Dutch sociologist, academic, and politician 1966 – Darren Holmes, American baseball player and coach 1966 – Erik Pappas, American baseball player and coach 1967 – Angel Martino, American swimmer 1968 – Vitaliy Kyrylenko, Ukrainian long jumper 1968 – Thomas Strunz, German footballer 1969 – Joe Buck, American sportscaster 1969 – Martin Koolhoven, Dutch director and screenwriter 1969 – Jon Olsen, American swimmer 1969 – Darren Woodson, American football player and sportscaster 1969 – Renée Zellweger, American actress and producer 1970 – Jason Lee, American skateboarder, actor, comedian and producer 1971 – Sara Baras, Spanish dancer 1971 – Brad Clontz, American baseball player 1973 – Carlota Castrejana, Spanish triple jumper 1973 – Fredrik Larzon, Swedish drummer 1973 – Barbara Rittner, German tennis player 1975 – Jacque Jones, American baseball player and coach 1976 – Gilberto da Silva Melo, Brazilian footballer 1976 – Tim Duncan, American basketball player 1976 – Breyton Paulse, South African rugby player 1976 – Rainer Schüttler, German tennis player and coach 1977 – Constantinos Christoforou, Cypriot singer-songwriter 1977 – Ilias Kotsios, Greek footballer 1977 – Marguerite Moreau, American actress and producer 1977 – Matthew West, American singer-songwriter, guitarist, and actor 1978 – Matt Walker, English swimmer 1980 – Ben Johnston, Scottish drummer and songwriter 1980 – James Johnston, Scottish bass player and songwriter 1980 – Daniel MacPherson, Australian actor and television host 1980 – Bruce Martin, New Zealand cricketer 1980 – Kazuhito Tadano, Japanese baseball player 1980 – Alejandro Valverde, Spanish cyclist 1981 – Dwone Hicks, American football player 1981 – Felipe Massa, Brazilian racing driver 1981 – John McFall, English sprinter 1981 – Anja Pärson, Swedish skier 1982 – Brian Barton, American baseball player 1982 – Monty Panesar, English cricketer 1982 – Marco Russo, Italian footballer 1983 – Johnathan Thurston, Australian rugby league player 1983 – DeAngelo Williams, American football player 1984 – Robert Andino, American baseball player 1984 – Isaac Kiprono Songok, Kenyan runner 1985 – Giedo van der Garde, Dutch racing driver 1986 – Alexei Emelin, Russian ice hockey player 1986 – Thin Seng Hon, Cambodian Paralympic athlete 1986 – Gwen Jorgensen, American triathlete 1986 – Claudia Rath, German heptathlete 1987 – Razak Boukari, Togolese footballer 1987 – Jay Park, American-South Korean singer-songwriter and dancer 1987 – Johann Smith, American soccer player 1988 – Sara Paxton, American actress 1988 – James Sheppard, Canadian ice hockey player 1989 – Marie-Michèle Gagnon, Canadian skier 1989 – Michael van Gerwen, Dutch darts player 1989 – Gedhun Choekyi Nyima, the 11th Panchen Lama 1990 – Jean-Éric Vergne, French racing driver 1990 – Taylor Walker, Australian footballer 1991 – Jordan Poyer, American football player 1991 – Alex Shibutani, American ice dancer 1993 – Alex Bowman, American race car driver 1993 – Daniel Norris, American baseball player 1993 – Raphaël Varane, French footballer 1994 – Omar McLeod, Jamaican hurdler 1995 – Lewis Baker, English footballer 1996 – Mack Horton, Australian swimmer 1997 – Julius Ertlthaler, Austrian footballer Deaths Pre-1600 501 – Rusticus, saint and archbishop of Lyon (b. 455) 775 – Smbat VII Bagratuni, Armenian prince 775 – Mushegh VI Mamikonian, Armenian prince 908 – Zhang Wenwei, Chinese chancellor 1074 – Herman I, Margrave of Baden 1077 – Géza I of Hungary (b. 1040) 1185 – Emperor Antoku of Japan (b. 1178) 1217 – Hermann I, Landgrave of Thuringia 1228 – Queen Isabella II of Jerusalem (b. 1212) 1243 – Boniface of Valperga, Bishop of Aosta 1264 – Roger de Quincy, 2nd Earl of Winchester, medieval English nobleman; Earl of Winchester (b. 1195) 1295 – Sancho IV of Castile (b. 1258) 1342 – Pope Benedict XII (b. 1285) 1397 – Thomas Holland, 2nd Earl of Kent, English nobleman 1472 – Leon Battista Alberti, Italian author, poet, and philosopher (b. 1404) 1516 – John Yonge, English diplomat (b. 1467) 1566 – Louise Labé, French poet and author (b. 1520) 1566 – Diane de Poitiers, mistress of King Henry II of France (b. 1499) 1595 – Torquato Tasso, Italian poet and songwriter (b. 1544) 1601–1900 1605 – Naresuan, Siamese King of Ayutthaya Kingdom (b. c. 1555) 1644 – Chongzhen Emperor of China (b. 1611) 1660 – Henry Hammond, English cleric and theologian (b. 1605) 1690 – David Teniers the Younger, Flemish painter and educator (b. 1610) 1744 – Anders Celsius, Swedish astronomer, physicist, and mathematician (b. 1701) 1770 – Jean-Antoine Nollet, French minister, physicist, and academic (b. 1700) 1800 – William Cowper, English poet (b. 1731) 1840 – Siméon Denis Poisson, French mathematician and physicist (b. 1781) 1873 – Fyodor Petrovich Tolstoy, Russian painter and sculptor (b. 1783) 1875 – 12th Dalai Lama (b. 1857) 1878 – Anna Sewell, English author (b. 1820) 1890 – Crowfoot, Canadian tribal chief (b. 1830) 1891 – Nathaniel Woodard, English priest and educator (b. 1811) 1892 – Henri Duveyrier, French explorer (b. 1840) 1892 – Karl von Ditmar, Estonian-German geologist and explorer (b. 1822) 1901–present 1906 – John Knowles Paine, American composer and educator (b. 1839) 1911 – Emilio Salgari, Italian journalist and author (b. 1862) 1913 – Joseph-Alfred Archambeault, Canadian bishop (b. 1859) 1915 – Frederick W. Seward, American journalist, lawyer, and politician, 6th United States Assistant Secretary of State (b. 1830) 1919 – Augustus D. Juilliard, American businessman and philanthropist (b. 1836) 1921 – Emmeline B. Wells, American journalist and women's rights advocate (b. 1828) 1923 – Louis-Olivier Taillon, Canadian lawyer and politician, 8th Premier of Quebec (b. 1840) 1928 – Pyotr Nikolayevich Wrangel, Russian general (b. 1878) 1936 – Wajed Ali Khan Panni, Bengali aristocrat and philanthropist (b. 1871) 1941 – Salih Bozok, Turkish commander and politician (b. 1881) 1943 – Vladimir Nemirovich-Danchenko, Russian director, producer, and playwright (b. 1858) 1944 – George Herriman, American cartoonist (b. 1880) 1944 – Tony Mullane, Irish-American baseball player (b. 1859) 1944 – William Stephens, American engineer and politician, 24th Governor of California (b. 1859) 1945 – Huldreich Georg Früh, Swiss composer (b. 1903) 1961 – Robert Garrett, American discus thrower and shot putter (b. 1875) 1970 – Anita Louise, American actress (b. 1915) 1972 – George Sanders, English actor (b. 1906) 1973 – Olga Grey, Hungarian-American actress (b. 1896) 1974 – Gustavo R. Vincenti, Maltese architect and developer (b. 1888) 1975 – Mike Brant, Israeli singer and songwriter (b.1947) 1976 – Carol Reed, English director and producer (b. 1906) 1976 – Markus Reiner, Israeli engineer and educator (b. 1886) 1982 – John Cody, American cardinal (b. 1907) 1983 – William S. Bowdern, American priest and author (b. 1897) 1988 – Carolyn Franklin, American singer-songwriter (b. 1944) 1988 – Clifford D. Simak, American journalist and author (b. 1904) 1990 – Dexter Gordon, American saxophonist, composer, and actor (b. 1923) 1992 – Yutaka Ozaki, Japanese singer-songwriter (b. 1965) 1995 – Art Fleming, American game show host (b. 1925) 1995 – Ginger Rogers, American actress, singer, and dancer (b. 1911) 1995 – Lev Shankovsky, Ukrainian military historian (b. 1903) 1996 – Saul Bass, American graphic designer and director (b. 1920) 1998 – Wright Morris, American author and photographer (b. 1910) 1999 – Michael Morris, 3rd Baron Killanin, Irish journalist and author (b. 1914) 1999 – Roger Troutman, American singer-songwriter and producer (b. 1951) 2000 – Lucien Le Cam, French mathematician and statistician (b. 1924) 2000 – David Merrick, American director and producer (b. 1911) 2001 – Michele Alboreto, Italian racing driver (b. 1956) 2002 – Lisa Lopes, American rapper and dancer (b. 1971) 2003 – Samson Kitur, Kenyan runner (b. 1966) 2004 – Thom Gunn, English-American poet and academic (b. 1929) 2005 – Jim Barker, American politician (b. 1935) 2005 – Swami Ranganathananda, Indian monk and educator (b. 1908) 2006 – Jane Jacobs, American-Canadian journalist, author, and activist (b. 1916) 2006 – Peter Law, Welsh politician and independent member of parliament (b. 1948) 2007 – Alan Ball Jr., English footballer and manager (b. 1945) 2007 – Arthur Milton, English footballer and cricketer (b. 1928) 2007 – Bobby Pickett, American singer-songwriter (b. 1938) 2008 – Humphrey Lyttelton, English trumpet player, composer, and radio host (b. 1921) 2009 – Bea Arthur, American actress and singer (b. 1922) 2010 – Dorothy Provine, American actress and singer (b. 1935) 2010 – Alan Sillitoe, English novelist, short story writer, essayist, and poet (b. 1928) 2011 – Poly Styrene, British musician (b. 1957) 2012 – Gerry Bahen, Australian footballer (b. 1929) 2012 – Denny Jones, American rancher and politician (b. 1910) 2012 – Moscelyne Larkin, American ballerina and educator (b. 1925) 2012 – Louis le Brocquy, Irish painter and illustrator (b. 1916) 2013 – Brian Adam, Scottish biochemist and politician (b. 1948) 2013 – Jacob Avshalomov, American composer and conductor (b. 1919) 2013 – György Berencsi, Hungarian virologist and academic (b. 1941) 2013 – Rick Camp, American baseball player (b. 1953) 2014 – Dan Heap, Canadian priest and politician (b. 1925) 2014 – William Judson Holloway Jr., American soldier, lawyer, and judge (b. 1923) 2014 – Earl Morrall, American football player and coach (b. 1934) 2014 – Tito Vilanova, Spanish footballer and manager (b. 1968) 2014 – Stefanie Zweig, German journalist and author (b. 1932) 2015 – Jim Fanning, American-Canadian baseball player and manager (b. 1927) 2015 – Matthias Kuhle, German geographer and academic (b. 1948) 2015 – Don Mankiewicz, American screenwriter and novelist (b. 1922) 2015 – Mike Phillips, American basketball player (b. 1956) 2016 – Tom Lewis, Australian politician, 33rd Premier of New South Wales (b. 1922) 2018 – Madeeha Gauhar, Pakistani actress, playwright and director of social theater, and women's rights activist (b. 1956) 2019 – John Havlicek, American basketball player (b. 1940) 2023 – Harry Belafonte, American singer, activist, and actor (b. 1927) Holidays and observances Anzac Day (Australia, New Zealand, Tonga) Christian feast day: Giovanni Battista Piamarta Major Rogation (Western Christianity) Mark the Evangelist Maughold Peter of Saint Joseph de Betancur Philo and Agathopodes Anianus of Alexandria April 25 (Eastern Orthodox liturgics) Freedom Day (Portugal) Liberation Day (Italy) Military Foundation Day (North Korea) World Malaria Day References External links BBC: On This Day Historical Events on April 25 Days of the year April
2736
https://en.wikipedia.org/wiki/Andalusia
Andalusia
Andalusia (, ; ) is the southernmost autonomous community in Peninsular Spain. Andalusia is located in the south of the Iberian Peninsula, in southwestern Europe. It is the most populous and the second-largest autonomous community in the country. It is officially recognised as a historical nationality and a national reality. The territory is divided into eight provinces: Almería, Cádiz, Córdoba, Granada, Huelva, Jaén, Málaga, and Seville. Its capital city is Seville. The seat of the High Court of Justice of Andalusia is located in the city of Granada. Andalusia is immediately south of the autonomous communities of Extremadura and Castilla-La Mancha; west of the autonomous community of Murcia and the Mediterranean Sea; east of Portugal and the Atlantic Ocean; and north of the Mediterranean Sea and the Strait of Gibraltar. Gibraltar shares a land border with the Andalusian portion of the province of Cádiz at the eastern end of the Strait of Gibraltar. The main mountain ranges of Andalusia are the Sierra Morena and the Baetic System, consisting of the Subbaetic and Penibaetic Mountains, separated by the Intrabaetic Basin. In the north, the Sierra Morena separates Andalusia from the plains of Extremadura and Castile–La Mancha on Spain's Meseta Central. To the south, the geographic subregion of lies mostly within the Baetic System, while is in the Baetic Depression of the valley of the Guadalquivir. The name "Andalusia" is derived from the Arabic word Al-Andalus (الأندلس), which in turn may be derived from the Vandals, the Goths or pre-Roman Iberian tribes. The toponym al-Andalus is first attested by inscriptions on coins minted in 716 by the new Muslim government of Iberia. These coins, called dinars, were inscribed in both Latin and Arabic. The region's history and culture have been influenced by the Tartessians, Iberians, Phoenicians, Carthaginians, Greeks, Romans, Vandals, Visigoths, Byzantines, Berbers, Arabs, Jews, Romanis and Castilians. During the Islamic Golden Age, Córdoba surpassed Constantinople to be Europe's biggest city, and became the capital of Al-Andalus and a prominent center of education and learning in the world, producing numerous philosophers and scientists. The Crown of Castile conquered and settled the Guadalquivir Valley in the 13th century. The mountainous eastern part of the region (the Kingdom of Granada) was subdued in the late 15th century. Atlantic-facing harbors prospered upon trade with the New World. Chronic inequalities in the social structure caused by uneven distribution of land property in large estates induced recurring episodes of upheaval and social unrest in the agrarian sector in the 19th and 20th centuries. Andalusia has historically been an agricultural region, compared to the rest of Spain and the rest of Europe. Still, the growth of the community in the sectors of industry and services was above average in Spain and higher than many communities in the Eurozone. The region has a rich culture and a strong identity. Many cultural phenomena that are seen internationally as distinctively Spanish are largely or entirely Andalusian in origin. These include flamenco and, to a lesser extent, bullfighting and Hispano-Moorish architectural styles, both of which are also prevalent in some other regions of Spain. Andalusia's hinterland is the hottest area of Europe, with Córdoba and Seville averaging above in summer high temperatures. These high temperatures, typical of the Guadalquivir valley (and other valleys in southern Iberia) are usually reached between 5 p.m. and 9 p.m. (local time), tempered by sea and mountain breezes afterwards. However, during heat waves late evening temperatures can locally stay around until close to midnight, and daytime highs of over are common. Also, Seville is the warmest city in continental Europe with average annual temperature of . Name Its present form is derived from the Arabic name for Muslim Iberia, "Al-Andalus". The etymology of the name "Al-Andalus" is disputed, and the extent of Iberian territory encompassed by the name has changed over the centuries. Traditionally it has been assumed to be derived from the name of the Vandals. Since the 1980s, a number of proposals have challenged this contention. Halm, in 1989, derived the name from a Gothic term, *, and in 2002, Bossong suggested its derivation from a pre-Roman substrate. The Spanish place name Andalucía (immediate source of the English Andalusia) was introduced into the Spanish languages in the 13th century under the form el Andalucía. The name was adopted to refer to those territories still under Moorish rule, and generally south of Castilla Nueva and Valencia, and corresponding with the former Roman province hitherto called Baetica in Latin sources. This was a Castilianization of Al-Andalusiya, the adjectival form of the Arabic language al-Andalus, the name given by the Arabs to all of the Iberian territories under Muslim rule from 711 to 1492. The etymology of al-Andalus is itself somewhat debated (see al-Andalus), but in fact it entered the Arabic language before this area came under Moorish rule. Like the Arabic term al-Andalus, in historical contexts the Spanish term Andalucía or the English term Andalusia do not necessarily refer to the exact territory designated by these terms today. Initially, the term referred exclusively to territories under Muslim control. Later, it was applied to some of the last Iberian territories to be regained from the Muslims, though not always to exactly the same ones. In the Estoria de España (also known as the Primera Crónica General) of Alfonso X of Castile, written in the second half of the 13th century, the term Andalucía is used with three different meanings: As a literal translation of the Arabic al-Ándalus when Arabic texts are quoted. To designate the territories the Christians had regained by that time in the Guadalquivir valley and in the Kingdoms of Granada and Murcia. In a document from 1253, Alfonso X styled himself Rey de Castilla, León y de toda Andalucía ("King of Castile, León and all of Andalusia"). To designate the territories the Christians had regained by that time in the Guadalquivir valley until that date (the Kingdoms of Jaén, Córdoba and Seville – the Kingdom of Granada was incorporated in 1492). This was the most common significance in the Late Middle Ages and Early modern period. From an administrative point of view, Granada remained separate for many years even after the completion of the Reconquista due, above all, to its emblematic character as the last territory regained, and as the seat of the important Real Chancillería de Granada, a court of last resort. Still, the reconquest and repopulation of Granada was accomplished largely by people from the three preexisting Christian kingdoms of Andalusia, and Granada came to be considered a fourth kingdom of Andalusia. The often-used expression "Four Kingdoms of Andalusia" dates back in Spanish at least to the mid-18th century. Symbols The Andalusian emblem shows the figure of Hercules and two lions between the two pillars of Hercules that tradition situates on either side of the Strait of Gibraltar. An inscription below, superimposed on an image of the flag of Andalusia reads Andalucía por sí, para España y la Humanidad ("Andalusia for herself, Spain and Humanity"). Over the two columns is a semicircular arch in the colours of the flag of Andalusia, with the Latin words Dominator Hercules Fundator (Lord Hercules is the Founder) superimposed. The official flag of Andalusia consists of three equal horizontal stripes, coloured green, white, and green respectively; the Andalusian coat of arms is superimposed on the central stripe. Its design was overseen by Blas Infante and approved in the Assembly of Ronda (a 1918 gathering of Andalusian nationalists at Ronda). Blas Infante considered these to have been the colours most used in regional symbols throughout the region's history. According to him, the green came in particular from the standard of the Umayyad Caliphate and represented the call for a gathering of the populace. The white symbolised pardon in the Almohad dynasty, interpreted in European heraldry as parliament or peace. Other writers have justified the colours differently, with some Andalusian nationalists referring to them as the Arbonaida, meaning white-and-green in Mozarabic, a Romance language that was spoken in the region in Muslim times. Nowadays, the Andalusian government states that the colours of the flag evoke the Andalusian landscape as well as values of purity and hope for the future. The anthem of Andalusia was composed by José del Castillo Díaz (director of the Municipal Band of Seville, commonly known as Maestro Castillo) with lyrics by Blas Infante. The music was inspired by Santo Dios, a popular religious song sung at harvest time by peasants and day labourers in the provinces of Málaga, Seville, and Huelva. Blas Infante brought the song to Maestro Castillo's attention; Maestro Castillo adapted and harmonized the traditional melody. The lyrics appeal to the Andalusians to mobilise and demand tierra y libertad ("land and liberty") by way of agrarian reform and a statute of autonomy within Spain. The Parliament of Andalusia voted unanimously in 1983 that the preamble to the Statute of Autonomy recognise Blas Infante as the Father of the Andalusian Nation (Padre de la Patria Andaluza), which was reaffirmed in the reformed Statute of Autonomy submitted to popular referendum 18 February 2007. The preamble of the present 2007 Statute of Autonomy says that Article 2 of the present Spanish Constitution of 1978 recognises Andalusia as a nationality. Later, in its articulation, it speaks of Andalusia as a "historic nationality" (Spanish: nacionalidad histórica). It also cites the 1919 Andalusianist Manifesto of Córdoba describing Andalusia as a "national reality" (realidad nacional), but does not endorse that formulation. Article 1 of the earlier 1981 Statute of Autonomy defined it simply as a "nationality" (nacionalidad). The national holiday, Andalusia Day, is celebrated on 28 February, commemorating the 1980 autonomy referendum. The honorific title of Hijo Predilecto de Andalucía ("Favourite Son of Andalusia") is granted by the Autonomous Government of Andalusia to those whose exceptional merits benefited Andalusia, for work or achievements in natural, social, or political science. It is the highest distinction given by the Autonomous Community of Andalusia. Geography The Sevillian historian Antonio Domínguez Ortiz wrote that: Location Andalusia has a surface area of , 17.3% of the territory of Spain. Andalusia alone is comparable in extent and in the variety of its terrain to any of several of the smaller European countries. To the east is the Mediterranean Sea; to the west Portugal and the Gulf of Cádiz (Atlantic Ocean); to the north the Sierra Morena constitutes the border with the Meseta Central; to the south, the self-governing British overseas territory of Gibraltar and the Strait of Gibraltar separate it from Morocco. Climate Andalusia is home to the hottest and driest climates in Spain, with yearly average rainfall around in Cabo de Gata, as well as some of the wettest ones, with yearly average rainfall above in inland Cádiz. In the west, weather systems sweeping in from the Atlantic ensure that it is relatively wet and humid in the winter, with some areas receiving copious amounts. Contrary to what many people think, as a whole, the region enjoys above-average yearly rainfall in the context of Spain. Andalusia sits at a latitude between 36° and 38° 44' N, in the warm-temperate region. In general, it experiences a hot-summer Mediterranean climate, with dry summers influenced by the Azores High, but subject to occasional torrential rains and extremely hot temperatures. In the winter, the tropical anticyclones move south, allowing cold polar fronts to penetrate the region. Still, within Andalusia there is considerable climatic variety. From the extensive coastal plains one may pass to the valley of the Guadalquivir, barely above sea level, then to the highest altitudes in the Iberian peninsula in the peaks of the Sierra Nevada. In a mere one can pass from the subtropical coast of the province of Granada to the snowy peaks of Mulhacén. Andalusia also includes both the dry Tabernas Desert in the province of Almería and the Sierra de Grazalema Natural Park in the province of Cádiz, which experiences Spain's greatest rainfall. Annual rainfall in the Sierra de Grazalema has been measured as high as in 1963, the highest ever recorded for any location in Iberia. Andalusia is also home to the driest place in continental Europe, the Cabo de Gata, with only of rain per year. In general, as one goes from west to east, away from the Atlantic, there is less precipitation. "Wet Andalusia" includes most of the highest points in the region, above all the Sierra de Grazalema but also the Serranía de Ronda in western Málaga. The valley of the Guadalquivir has moderate rainfall. The Tabernas Desert in Almería, Europe's only true desert, has less than 75 days with any measurable precipitation, and some particular places in the desert have as few as 50 such days. Much of "dry Andalusia" has more than 300 sunny days a year. The average temperature in Andalusia throughout the year is over . Averages in the cities range from in Baeza to in Almería. Much of the Guadalquivir valley and the Mediterranean coast has an average of about . The coldest month is January when Granada at the foot of the Sierra Nevada experiences an average temperature of . The hottest are July and August, with an average temperature of for Andalusia as a whole. Córdoba is the hottest provincial capital, followed by Seville. The Guadalquivir valley has experienced some of the highest temperatures recorded in Europe, with a maximum of recorded at Córdoba (14 August 2021), and Seville. The mountains of Granada and Jaén have the coldest temperatures in southern Iberia, but do not reach continental extremes (and, indeed are surpassed by some mountains in northern Spain). In the cold snap of January 2005, Santiago de la Espada (Jaén) experienced a temperature of and the ski resort at Sierra Nevada National Park—the southernmost ski resort in Europe—dropped to . Sierra Nevada Natural Park has Iberia's lowest average annual temperature, ( at Pradollano) and its peaks remain snowy practically year-round. Terrain Mountain ranges affect climate, the network of rivers, soils and their erosion, bioregions, and even human economies insofar as they rely on natural resources. The Andalusian terrain offers a range of altitudes and slopes. Andalusia has the Iberian peninsula's highest mountains and nearly 15 percent of its terrain over . The picture is similar for areas under (with the Baetic Depression), and for the variety of slopes. The Atlantic coast is overwhelmingly beach and gradually sloping coasts; the Mediterranean coast has many cliffs, above all in the Malagan Axarquía and in Granada and Almería. This asymmetry divides the region naturally into (two mountainous areas) and (the broad basin of the Guadalquivir). The Sierra Morena separates Andalusia from the plains of Extremadura and Castile–La Mancha on Spain's Meseta Central. Although sparsely populated, this is not a particularly high range, and its highest point, the peak of La Bañuela in the Sierra Madrona, lies outside of Andalusia. Within the Sierra Morena, the gorge of Despeñaperros forms a natural frontier between Castile and Andalusia. The Baetic Cordillera consists of the parallel mountain ranges of the Cordillera Penibética near the Mediterranean coast and the Cordillera Subbética inland, separated by the Surco Intrabético. The Cordillera Subbética is quite discontinuous, offering many passes that facilitate transportation, but the Penibético forms a strong barrier between the Mediterranean coast and the interior. The Sierra Nevada, part of the Cordillera Penibética in the Province of Granada, has the highest peaks in Iberia: El Mulhacén at and El Veleta at . Lower Andalusia, the Baetic Depression, the basin of the Guadalquivir, lies between these two mountainous areas. It is a nearly flat territory, open to the Gulf of Cádiz in the southeast. Throughout history, this has been the most populous part of Andalusia. Hydrography Andalusia has rivers that flow into both the Atlantic and the Mediterranean. Flowing to the Atlantic are the Guadiana, Odiel-Tinto, Guadalquivir, Guadalete, and Barbate. Flowing to the Mediterranean are the Guadiaro, Guadalhorce, Guadalmedina, Guadalfeo, Andarax (also known as the Almería) and Almanzora. Of these, the Guadalquivir is the longest in Andalusia and fifth longest on the Iberian peninsula, at . The rivers of the Atlantic basin are characteristically long, run through mostly flat terrain, and have broad river valleys. As a result, at their mouths are estuaries and wetlands, such as the marshes of Doñana in the delta of the Guadalquivir, and wetlands of the Odiel. In contrast, the rivers of the Mediterranean Basin are shorter, more seasonal, and make a precipitous descent from the mountains of the Baetic Cordillera. Their estuaries are small, and their valleys are less suitable for agriculture. Also, being in the rain shadow of the Baetic Cordillera means that they receive a lesser volume of water. The following hydrographic basins can be distinguished in Andalusia. On the Atlantic side are the Guadalquivir basin; the Andalusian Atlantic Basin with the sub-basins Guadalete-Barbate and Tinto-Odiel; and the Guadiana basin. On the Mediterranean side is the Andalusian Mediterranean Basin and the upper portion of the basin of the Segura. Soils The soils of Andalusia can be divided into three large areas: the Sierra Morena, Cordillera Subbética, and the Baetic Depression and the Surco Intrabético. The Sierra Morena, due to its morphology and the acidic content of its rocks, developed principally relatively poor, shallow soils, suitable only for forests. In the valleys and in some areas where limestone is present, deeper soils allowed farming of cereals suitable for livestock. The more complicated morphology of the Baetic Cordillera makes it more heterogeneous, with the most heterogeneous soils in Andalusia. Very roughly, in contrast to the Sierra Morena, a predominance of basic (alkaline) materials in the Cordillera Subbética, combined with a hilly landscape, generates deeper soils with greater agricultural capacity, suitable to the cultivation of olives. Finally, the Baetic Depression and the Surco Intrabético have deep, rich soils, with great agricultural capacity. In particular, the alluvial soils of the Guadalquivir valley and plain of Granada have a loamy texture and are particularly suitable for intensive irrigated crops. In the hilly areas of the countryside, there is a double dynamic: the depressions have filled with older lime-rich material, developing the deep, rich, dark clay soils the Spanish call bujeo, or tierras negras andaluzas, excellent for dryland farming. In other zones, the whiter albariza provides an excellent soil for vineyards. Despite their marginal quality, the poorly consolidated soils of the sandy coastline of Huelva and Almería have been successfully used in recent decades for hothouse cultivation under clear plastic of strawberries, raspberries, blueberries, and other fruits. Flora Biogeographically, Andalusia forms part of the Western Mediterranean subregion of the Mediterranean Basin, which falls within the Boreal Kingdom. Five floristic provinces lie, in whole or in part, within Andalusia: along much of the Atlantic coast, the Lusitanian-Andalusian littoral or Andalusian Atlantic littoral; in the north, the southern portion of the Luso-Extremaduran floristic province; covering roughly half of the region, the Baetic floristic province; and in the extreme east, the Almerian portion of the Almerian-Murcian floristic province and (coinciding roughly with the upper Segura basin) a small portion of the Castilian-Maestrazgan-Manchegan floristic province. These names derive primarily from past or present political geography: "Luso" and "Lusitanian" from Lusitania, one of three Roman provinces in Iberia, most of the others from present-day Spanish provinces, and Maestrazgo being a historical region of northern Valencia. In broad terms, the typical vegetation of Andalusia is Mediterranean woodland, characterized by leafy xerophilic perennials, adapted to the long, dry summers. The dominant species of the climax community is the holly oak (Quercus ilex). Also abundant are cork oak (Quercus suber), various pines, and Spanish fir (Abies pinsapo). Due to cultivation, olive (Olea europaea) and almond (Prunus dulcis) trees also abound. The dominant understory is composed of thorny and aromatic woody species, such as rosemary (Rosmarinus officinalis), thyme (Thymus), and Cistus. In the wettest areas with acidic soils, the most abundant species are the oak and cork oak, and the cultivated Eucalyptus. In the woodlands, leafy hardwoods of genus Populus (poplars, aspens, cottonwoods) and Ulmus (elms) are also abundant; poplars are cultivated in the plains of Granada. The Andalusian woodlands have been much altered by human settlement, the use of nearly all of the best land for farming, and frequent wildfires. The degraded forests become shrubby and combustible garrigue. Extensive areas have been planted with non-climax trees such as pines. There is now a clear conservation policy for the remaining forests, which survive almost exclusively in the mountains. Fauna The biodiversity of Andalusia extends to its fauna as well. More than 400 of the 630 vertebrate species extant in Spain can be found in Andalusia. Spanning the Mediterranean and Atlantic basins, and adjacent to the Strait of Gibraltar, Andalusia is on the migratory route of many of the numerous flocks of birds that travel annually from Europe to Africa and back. The Andalusian wetlands host a rich variety of birds. Some are of African origin, such as the red-knobbed coot (Fulica cristata), the purple swamphen (Porphyrio porphyrio), and the greater flamingo (Phoenicopterus roseus). Others originate in Northern Europe, such as the greylag goose (Anser anser). Birds of prey (raptors) include the Spanish imperial eagle (Aquila adalberti), the griffon vulture (Gyps fulvus), and both the black and red kite (Milvus migrans and Milvus milvus). Among the herbivores, are several deer (Cervidae) species, notably the fallow deer (Dama dama) and roe deer (Capreolus capreolus); the European mouflon (Ovis aries musimon), a feral sheep; and the Spanish ibex (Capra pyrenaica, which despite its scientific name is no longer found in the Pyrenees). The Spanish ibex has recently been losing ground to the Barbary sheep (Ammotragus lervia), an invasive species from Africa, introduced for hunting in the 1970s. Among the small herbivores are rabbits—especially the European rabbit (Oryctolagus cuniculus)—which form the most important part of the diet of the carnivorous species of the Mediterranean woodlands. The large carnivores such as the Iberian wolf (Canis lupus signatus) and the Iberian lynx (Lynx pardinus) are quite threatened, and are limited to the Sierra de Andújar, inside of Sierra Morena, Doñana and Despeñaperros. Stocks of the wild boar (Sus scrofa), on the other hand, have been well preserved because they are popular with hunters. More abundant and in varied situations of conservation are such smaller carnivores as otters, dogs, foxes, the European badger (Meles meles), the European polecat (Mustela putorius), the least weasel (Mustela nivalis), the European wildcat (Felis silvestris), the common genet (Genetta genetta), and the Egyptian mongoose (Herpestes ichneumon). Other notable species are Acherontia atropos (a variety of death's-head hawkmoth), Vipera latasti (a venomous snake), and the endemic (and endangered) fish Aphanius baeticus. Protected areas Andalusia has many unique ecosystems. In order to preserve these areas in a manner compatible with both conservation and economic exploitation, many of the most representative ecosystems have been given protected status. The various levels of protection are encompassed within the Network of Protected Natural Spaces of Andalusia (Red de Espacios Naturales Protegidos de Andalucía, RENPA) which integrates all protected natural spaces located in Andalusia, whether they are protected at the level of the local community, the autonomous community of Andalusia, the Spanish state, or by international conventions. RENPA consists of 150 protected spaces, consisting of two national parks, 24 natural parks, 21 periurban parks (on the fringes of cities or towns), 32 natural sites, two protected countrysides, 37 natural monuments, 28 nature reserves, and four concerted nature reserves (in which a government agency coordinates with the owner of the property for its management), all part of the European Union's Natura 2000 network. Under the international ambit are the nine Biosphere Reserves, 20 Ramsar wetland sites, four Specially Protected Areas of Mediterranean Importance and two UNESCO Geoparks. In total, nearly 20 percent of the territory of Andalusia lies in one of these protected areas, which constitute roughly 30 percent of the protected territory of Spain. Among these many spaces, some of the most notable are the Sierras de Cazorla, Segura y Las Villas Natural Park, Spain's largest natural park and the second largest in Europe, the Sierra Nevada National Park, Doñana National Park and Natural Park, the Tabernas Desert, and the Cabo de Gata-Níjar Natural Park, the largest terrestrial-maritime reserve in the European Western Mediterranean Sea. History The geostrategic position of Andalusia in the extreme south of Europe, providing (along with Morocco) a gateway between Europe and Africa, added to its position between the Atlantic Ocean and the Mediterranean Sea, as well as its rich deposits of minerals and its agricultural wealth, have made Andalusia a tempting prize for civilizations since prehistoric times. Add to this its area of (larger than many European countries), and it can be no surprise that Andalusia has figured prominently in the history of Europe and the Mediterranean. Several theories postulate that the first hominids in Europe were in Andalusia, having passed across the Strait of Gibraltar; the earliest known paintings of humanity have been found in the Caves of Nerja, Málaga. The first settlers, based on artifacts from the archaeological sites at Los Millares, El Argar, and Tartessos, were clearly influenced by cultures of the Eastern Mediterranean who arrived on the Andalusian coast. Andalusia then went through a period of protohistory, when the region did not have a written language of its own, but its existence was known to and documented by literate cultures, principally the Phoenicians and Ancient Greeks, wide historical moment in which Cádiz was founded, regarded by many as the oldest city still standing in Western Europe; another city among the oldest is Málaga. During the second millennium BCE, the kingdom of Tartessos developed in Andalusia. Carthaginians and Romans With the fall of the original Phoenician cities in the East, Carthage – itself the most significant Phoenician colony – became the dominant sea power of the western Mediterranean and the most important trading partner for the Phoenician towns along the Andalusian coast. Between the First and Second Punic Wars, Carthage extended its control beyond Andalucia to include all of Iberia except the Basque Country. Some of the more prominent Andalusian cities during Carthaginian rule include Gadir (Cadiz), Qart Juba (Córdoba), Ilipa (near modern Seville), Malaka (Málaga) and Sexi or Seksi (near modern Almuñécar). Andalusia was the major staging ground for the war with Rome led by the Carthaginian general Hannibal. The Romans defeated the Carthaginians and conquered Andalusia, the region being renamed Baetica. It was fully incorporated into the Roman Empire, and from this region came many Roman magistrates and senators, as well as the emperors Trajan and (most likely) Hadrian. Vandals, Visigoths and the Byzantine Empire The Vandals moved briefly through the region during the 5th century AD before settling in North Africa, after which the region fell into the hands of the Visigothic Kingdom. The Visigoths in this region were practically independent of the Visigothic Catholic Kingdom of Toledo. This is the era of Saints Isidore of Seville and Hermenegild. During this period, around 555 AD, the Eastern Roman Empire conquered Andalusia under Justinian I, the Eastern Roman emperor. They established Spania, a province of the Byzantine Empire from 552 until 624. Although their holdings were quickly reduced, they continued to have interests in the region until it was lost altogether in 624. Al-Andalus states The Visigothic era came to an abrupt end in 711 with the Umayyad conquest of Hispania by the Muslim Umayyad general Tariq ibn Ziyad. Tariq is known in Umayyad history and legend as a formidable conqueror who burned his fleet of ships when he landed with his troops on the coast of Gibraltar – an acronym of "Jabel alTariq" meaning "the mountain of Tariq". When the Muslim invaders seized control and consolidated their dominion of the region, they remained tolerant of the local faiths, but they also needed a place for their own faith. In the 750s, they forcibly rented half of Córdoba 's Cathedral of San Vicente (Visigothic) to use as a mosque. The mosque's hypostyle plan, consisting of a rectangular prayer hall and an enclosed courtyard, followed a tradition established in the Umayyad and Abbasid mosques of Syria and Iraq while the dramatic articulation of the interior of the prayer hall was unprecedented. The system of columns supporting double arcades of piers and arches with alternating red and white voussoirs is an unusual treatment that, structurally, combined striking visual effect with the practical advantage of providing greater height within the hall. Alternating red and white voussoirs are associated with Umayyad monuments such as the Great Mosque of Damascus and the Dome of the Rock. Their use in the Great Mosque of Córdoba manages to create a stunningly original visual composition even as it emphasises 'Abd al-Rahman's connection to the established Umayyad tradition. In this period, the name "Al-Andalus" was applied to the Iberian Peninsula, and later it referred to the parts not controlled by the Gothic states in the North. The Muslim rulers in Al-Andalus were economic invaders and interested in collecting taxes; social changes imposed on the native populace were mainly confined to geographical, political and legal conveniences. Al-Andalus remained connected to other states under Muslim rule; also trade routes between it and Constantinople and Alexandria remained open, while many cultural features of the Roman Empire were transmitted throughout Europe and the Near East by its successor state, the Byzantine Empire. Byzantine architecture is an example of such cultural diffusion continuing even after the collapse of the empire. Nevertheless, the Guadalquivir River valley became the point of power projection in the peninsula with the Caliphate of Córdoba making Córdoba its capital. The Umayyad Caliphate produced such leaders as Caliph Abd-ar-Rahman III (ruled 912–961) and his son, Caliph Al-Hakam II (ruled 961–976) and built the magnificent Great Mosque of Córdoba. Under these rulers, Córdoba was the center of economic and cultural significance. By the 10th century, the northern Kingdoms of Spain and other European Crowns had begun what would eventually become the Reconquista: the reconquest of the Iberian Peninsula for Christendom. Caliph Abd-ar-Rahman suffered some minor military defeats, but often managed to manipulate the Gothic northern kingdoms to act against each other's interests. Al-Hakam achieved military successes, but at the expense of uniting the north against him. In the 10th century the Saracen rulers of Andalusia had a Slavic army of 13,750 men. After the conquest of Toledo in 1086 by Alfonso VI, the Crown of Castille and the Crown of Aragon dominated large parts of the peninsula. The main Taifas therefore had to resort to assistance from various other powers across the Mediterranean. A number of different Muslim dynasties of North African origin—notably Almoravid dynasty and Almohad dynasty—dominated a slowly diminishing Al-Andalus over the next several centuries. After the victory at the Battle of Sagrajas (1086) put a temporary stop to Castilian expansion, the Almoravid dynasty reunified Al-Andalus with its capital in Córdoba, ruling until the mid-12th century. The various Taifa kingdoms were assimilated. the Almohad dynasty expansion in North Africa weakened Al-Andalus, and in 1170 the Almohads transferred their capital from Marrakesh to Seville. The victory at the Battle of Las Navas de Tolosa (1212) marked the beginning of the end of the Almohad dynasty. Crown of Castile The weakness caused by the collapse of Almohad power and the subsequent creation of new Taifas, each with its own ruler, led to the rapid Castile reconquest of the valley of the Guadalquivir. Córdoba was regained in 1236 and Seville in 1248. The fall of Granada on 2 January 1492 put an end to the Nasrid rule; an event that marks the beginning of Andalusia, the southern four territories of the Crown of Castile in the Iberian Peninsula. Seven months later, on 3 August 1492 Christopher Columbus left the town of Palos de la Frontera, Huelva, with the first expedition that resulted in the Discovery of the Americas, that would end the Middle Ages and signal the beginning of modernity. Many Castilians participated in this and other expeditions that followed, some of them known as the Minor or Andalusian Journeys. Contacts between Spain and the Americas, including royal administration and the shipping trade from Asia and America for over three hundred years, came almost exclusively through the south of Spain, specially Seville and Cadiz ports. As a result, it became the wealthiest, most influential region in Spain and amongst the most influential in Europe. For example, the Habsburg diverted much of this trade wealth to control its European territories. Habsburg Spain In the first half of the 16th century plague was still prevalent in Spain. According to George C. Kohn, "One of the worst epidemics of the century, whose miseries were accompanied by severe drought and food shortage, started in 1505; by 1507, about 100,000 people had died in Andalusia alone. Andalusia was struck once again in 1646. For three years, plague haunted the entire region, causing perhaps as many as 200,000 deaths, especially in Málaga and Seville." A second insurrection, the Morisco Revolt (1568–1571), ensued in the Kingdom of Granada. It was crushed and the demographics of the kingdom of Granada was hammered, with the Morisco population decreasing in number by more than 100,000 including deaths, flights and deportations, contrasting with the less than 40,000 number of incoming settlers. In 1810–12 Spanish troops strongly resisted the French occupation during the Peninsular War (part of the Napoleonic Wars). Andalusia profited from the Spanish overseas empire, although much trade and finance eventually came to be controlled by other parts of Europe to where it was ultimately destined. In the 18th century, commerce from other parts of Spain began to displace Andalusian commerce when the Spanish government ended Andalusia's trading monopoly with the colonies in the Americas. The loss of the empire in the 1820s hurt the economy of the region, particularly the cities that had benefited from the trade and ship building. The construction of railways in the latter part of the 19th century enabled Andalusia to better develop its agricultural potential and it became an exporter of food. While industrialisation was taking off in the northern Spanish regions of Catalonia and the Basque country, Andalusia remained traditional and displayed a deep social division between a small class of wealthy landowners and a population made up largely of poor agricultural labourers and tradesmen. Francoist oppressions Andalusia was one of the worst affected regions of Spain by Francisco Franco's brutal campaign of mass-murder and political suppression called the White Terror during and after the Spanish Civil War. The Nationalist rebels bombed and seized the working-class districts of the main Andalusian cities in the first days of the war, and afterwards went on to execute thousands of workers and militants of the leftist parties: in the city of Córdoba 4,000; in the city of Granada 5,000; in the city of Seville 3,028; and in the city of Huelva 2,000 killed and 2,500 disappeared. The city of Málaga, occupied by the Nationalists in February 1937 following the Battle of Málaga, experienced one of the harshest repressions following Francoist victory with an estimated total of 17,000 people summarily executed. Carlos Arias Navarro, then a young lawyer who as public prosecutor signed thousands of execution warrants in the trials set up by the triumphant rightists, became known as "The Butcher of Málaga" (Carnicero de Málaga). Paul Preston estimates the total number of victims of deliberately killed by the Nationalists in Andalusia at 55,000. Government and politics Andalusia is one of the 17 autonomous communities of Spain. The Regional Government of Andalusia (Spanish: Junta de Andalucía) includes the Parliament of Andalusia, its chosen president, a Consultative Council, and other bodies. The Autonomous Community of Andalusia was formed in accord with a referendum of 28 February 1980 and became an autonomous community under the 1981 Statute of Autonomy known as the Estatuto de Carmona. The process followed the Spanish Constitution of 1978, still current as of 2009, which recognizes and guarantees the right of autonomy for the various regions and nationalities of Spain. The process to establish Andalusia as an autonomous region followed Article 151 of the Constitution, making Andalusia the only autonomous community to take that particular course. That article was set out for regions like Andalusia that had been prevented by the outbreak of the Spanish Civil War from adopting a statute of autonomy during the period of the Second Spanish Republic. Article 1 of the 1981 Statute of Autonomy justifies autonomy based on the region's "historical identity, on the self-government that the Constitution permits every nationality, on outright equality to the rest of the nationalities and regions that compose Spain, and with a power that emanates from the Andalusian Constitution and people, reflected in its Statute of Autonomy". In October 2006 the constitutional commission of the Cortes Generales (the national legislature of Spain), with favorable votes from the left-of-center Spanish Socialist Workers' Party (PSOE), the leftist United Left (IU) and the right-of-center People's Party (PP), approved a new Statute of Autonomy for Andalusia, whose preamble refers to the community as a "national reality" (realidad nacional): On 2 November 2006 the Spanish Chamber Deputies ratified the text of the Constitutional Commission with 306 votes in favor, none opposed, and 2 abstentions. This was the first time a Spanish Organic Law adopting a Statute of Autonomy was approved with no opposing votes. The Senate, in a plenary session of 20 December 2006, ratified the referendum to be voted upon by the Andalusian public 18 February 2007. The Statute of Autonomy spells out Andalusia's distinct institutions of government and administration. Chief among these is the Andalusian Autonomous Government (Junta de Andalucía). Other institutions specified in the Statute are the Defensor del Pueblo Andaluz (literally "Defender of the Andalusian People", basically an ombudsperson), the Consultative Council, the Chamber of Accounts, the Audiovisual Council of Andalusia, and the Economic and Social Council. The Andalusian Statute of Autonomy recognizes Seville as the autonomy's capital. The Andalusian Autonomous Government is located there. The region's highest court, the High Court of Andalusia (Tribunal Superior de Justicia de Andalucía) is not part of the Autonomous Government, and has its seat in Granada. Autonomous Government The Andalusian Autonomous Government (Junta de Andalucía) is the institution of self-government of the Autonomous Community of Andalusia. Within the government, the President of Andalusia is the supreme representative of the autonomous community, and the ordinary representative of the Spanish state in the autonomous community. The president is formally named to the position by the Monarch of Spain and then confirmed by a majority vote of the Parliament of Andalusia. In practice, the monarch always names a person acceptable to the ruling party or coalition of parties in the autonomous region. In theory, were the candidate to fail to gain the needed majority, the monarch could propose a succession of candidates. After two months, if no proposed candidate could gain the parliament's approval, the parliament would automatically be dissolved and the acting president would call new elections. On 18 January 2019 Juan Manuel Moreno was elected as the sixth president of Andalusia. The Council of Government, the highest political and administrative organ of the Community, exercises regulatory and executive power. The President presides over the council, which also includes the heads of various departments (Consejerías). In the current legislature (2008–2012), there are 15 of these departments. In order of precedence, they are Presidency, Governance, Economy and Treasury, Education, Justice and Public Administration, Innovation, Science and Business, Public Works and Transportation, Employment, Health, Agriculture and Fishing, Housing and Territorial Planning, Tourism, Commerce and Sports, Equality and Social Welfare, Culture, and Environment. The Parliament of Andalusia, its Autonomic Legislative Assembly, develops and approves laws and elects and removes the President. Elections to the Andalusian Parliament follow a democratic formula through which the citizens elect 109 representatives. After the approval of the Statute of Autonomy through Organic Law 6/1981 on 20 December 1981, the first elections to the autonomic parliament took place 23 May 1982. Further elections have occurred in 1986, 1990, 1994, 1996, 2000, 2004, and 2008. The current (2008–2012) legislature includes representatives of the PSOE-A (Andalusian branch of the left-of-center PSOE), PP-A (Andalusian branch of the right-of-center PP) and IULV-CA (Andalusian branch of the leftist IU). Judicial power The High Court of Andalusia (Tribunal Superior de Justicia de Andalucía) in Granada is subject only to the higher jurisdiction of the Supreme Court of Spain. The High Court is not an organ of the Autonomous Community, but rather of the Judiciary of Spain, which is unitary throughout the kingdom and whose powers are not transferred to the autonomous communities. The Andalusian territory is divided into 88 legal/judicial districts (partidos judiciales). Administrative divisions Provinces Andalusia consists of eight provinces. The latter were established by Javier de Burgos in the 1833 territorial division of Spain. Each of the Andalusian provinces bears the same name as its capital: Andalusia is traditionally divided into two historical subregions: or (Andalucía Oriental), consisting of the provinces of Almería, Granada, Jaén, and Málaga, and or (Andalucía Occidental), consisting of the provinces of Cádiz, Córdoba, Huelva and Seville. Comarcas and mancomunidades Within the various autonomous communities of Spain, comarcas are comparable to shires (or, in some countries, counties) in the English-speaking world. Unlike in some of Spain's other autonomous communities, under the original 1981 Statute of Autonomy, the comarcas of Andalusia had no formal recognition, but, in practice, they still had informal recognition as geographic, cultural, historical, or in some cases administrative entities. The 2007 Statute of Autonomy echoes this practice, and mentions comarcas in Article 97 of Title III, which defines the significance of comarcas and establishes a basis for formal recognition in future legislation. The current statutory entity that most closely resembles a comarca is the , a freely chosen, bottom-up association of municipalities intended as an instrument of socioeconomic development and coordination between municipal governments in specific areas. Municipalities and local entities Beyond the level of provinces, Andalusia is further divided into 774 municipalities (municipios). The municipalities of Andalusia are regulated by Title III of the Statute of Autonomy, Articles 91–95, which establishes the municipality as the basic territorial entity of Andalusia, each of which has legal personhood and autonomy in many aspects of its internal affairs. At the municipal level, representation, government and administration is performed by the ayuntamiento (municipal government), which has competency for urban planning, community social services, supply and treatment of water, collection and treatment of waste, and promotion of tourism, culture, and sports, among other matters established by law. Among the more important Andalusian cities besides the provincial capitals are: El Ejido, Níjar and Roquetas de Mar (Almería) La Línea de la Concepción, Algeciras, Sanlúcar de Barrameda, San Fernando, Chiclana de la Frontera, Puerto Real, Arcos de la Frontera, Jerez and El Puerto de Santa María (Cádiz) Lucena, Pozoblanco, Montilla and Puente Genil (Córdoba) Almuñécar, Guadix, Loja and Motril (Granada) Linares, Andújar, Úbeda and Baeza (Jaén) Marbella, Mijas, Vélez-Málaga, Fuengirola, Torremolinos, Estepona, Benalmádena, Antequera, Rincón de la Victoria and Ronda (Málaga) Utrera, Dos Hermanas, Alcalá de Guadaíra, Osuna, Mairena del Aljarafe, Écija and Lebrija (Sevilla) In conformity with the intent to devolve control as locally as possible, in many cases, separate nuclei of population within municipal borders each administer their own interests. These are variously known as pedanías ("hamlets"), villas ("villages"), aldeas (also usually rendered as "villages"), or other similar names. Main cities Demographics Andalusia ranks first by population among the 17 autonomous communities of Spain. The estimated population at the beginning of 2009 was 8,285,692. The population is concentrated, above all, in the provincial capitals and along the coasts, so that the level of urbanization is quite high; half the population is concentrated in the 28 cities of more than 50,000 inhabitants. The population is aging, although the process of immigration is countering the inversion of the population pyramid. Population change At the end of the 20th century, Andalusia was in the last phase of demographic transition. The death rate stagnated at around 8–9 per thousand, and the population came to be influenced mainly by birth and migration. In 1950, Andalusia had 20.04 percent of the national population of Spain. By 1981, this had declined to 17.09 percent. Although the Andalusian population was not declining in absolute terms, these relative losses were due to emigration great enough to nearly counterbalance having the highest birth rate in Spain. Since the 1980s, this process has reversed on all counts, and as of 2009, Andalusia has 17.82 percent of the Spanish population. The birth rate is sharply down, as is typical in developed economies, although it has lagged behind much of the rest of the world in this respect. Furthermore, prior emigrants have been returning to Andalusia. Beginning in the 1990s, others have been immigrating in large numbers as well, as Spain has become a country of net immigration. At the beginning of the 21st century, statistics show a slight increase in the birth rate, due in large part to the higher birth rate among immigrants. The result is that as of 2009, the trend toward rejuvenation of the population is among the strongest of any autonomous community of Spain, or of any comparable region in Europe. Structure At the beginning of the 21st century, the population structure of Andalusia shows a clear inversion of the population pyramid, with the largest cohorts falling between ages 25 and 50. Comparison of the population pyramid in 2008 to that in 1986 shows: A clear decrease in the population under the age of 25, due to a declining birth rate. An increase in the adult population, as the earlier, larger cohort born in the "baby boom" of the 1960s and 1970s reach adulthood. This effect has been exacerbated by immigration: the largest contingent of immigrants are young adults. A further increase in the adult population, and especially the older adult population, due to increased life expectancy. As far as composition by sex, two aspects stand out: the higher percentage of women in the elderly population, owing to women's longer life expectancy, and, on the other hand, the higher percentage of men of working age, due in large part to a predominantly male immigrant population. Immigration In 2005, 5.35 percent of the population of Andalusia were born outside of Spain. This is a relatively low number for a Spanish region, the national average being three percentage points higher. The immigrants are not evenly distributed among the Andalusian provinces: Almería, with a 15.20 percent immigrant population, is third among all provinces in Spain, while at the other extreme Jaén is only 2.07 percent immigrants and Córdoba 1.77 percent. The predominant nationalities among the immigrant populations are Moroccan (92,500, constituting 17.79 percent of the foreigners living in Andalusia) and British (15.25 percent across the region). When comparing world regions rather than individual countries, the single largest immigrant block is from the region of Latin America, outnumbering not only all North Africans, but also all non-Spanish Western Europeans. Demographically, this group has provided an important addition to the Andalusian labor force. Economy Andalusia is traditionally an agricultural area, but the service sector (particularly tourism, retail sales, and transportation) now predominates. The once booming construction sector, hit hard by the 2009 recession, was also important to the region's economy. The industrial sector is less developed than most other regions in Spain. Between 2000 and 2006 economic growth per annum was 3.72%, one of the highest in the country. Still, according to the Spanish (INE), the GDP per capita of Andalusia (€17,401; 2006) remains the second lowest in Spain, with only Extremadura lagging behind. The Gross domestic product (GDP) of the autonomous community was 160.6 billion euros in 2018, accounting for 13.4% of Spanish economic output. GDP per capita adjusted for purchasing power was 20,500 euros or 68% of the EU27 average in the same year. Primary sector The primary sector, despite adding the least of the three sectors to the regional GDP remains important, especially when compared to typical developed economies. The primary sector produces 8.26 percent of regional GDP, 6.4 percent of its GVA and employs 8.19 percent of the workforce. In monetary terms it could be considered a rather uncompetitive sector, given its level of productivity compared to other Spanish regions. In addition to its numeric importance relative to other regions, agriculture and other primary sector activities have strong roots in local culture and identity. The primary sector is divided into a number of subsectors: agriculture, commercial fishing, animal husbandry, hunting, forestry, mining, and energy. Agriculture, husbandry, hunting, and forestry For many centuries, agriculture dominated Andalusian society, and, with 44.3 percent of its territory cultivated and 8.4 percent of its workforce in agriculture as of 2016 it remains an integral part of Andalusia's economy. However, its importance is declining, like the primary and secondary sectors generally, as the service sector is increasingly taking over. The primary cultivation is dryland farming of cereals and sunflowers without artificial irrigation, especially in the vast countryside of the Guadalquivir valley and the high plains of Granada and Almería-with a considerably lesser and more geographically focused cultivation of barley and oats. Using irrigation, maize, cotton and rice are also grown on the banks of the Guadalquivir and Genil. The most important tree crops are olives, especially in the Subbetic regions of the provinces of Córdoba and Jáen, where irrigated olive orchards constitute a large component of agricultural output. There are extensive vineyards in various zones such as Jerez de la Frontera (sherry), Condado de Huelva, Montilla-Moriles and Málaga. Fruits—mainly citrus fruits—are grown near the banks of the Guadalquivir; almonds, which require far less water, are grown on the high plains of Granada and Almería. In monetary terms, by far the most productive and competitive agriculture in Andalusia is the intensive forced cultivation of strawberries, raspberries, blueberries, and other fruits grown under hothouse conditions under clear plastic, often in sandy zones, on the coasts, in Almería and Huelva. Organic farming has recently undergone rapid expansion in Andalusia, mainly for export to European markets but with increasing demand developing in Spain. Andalusia has a long tradition of animal husbandry and livestock farming, but it is now restricted mainly to mountain meadows, where there is less pressure from other potential uses. Andalusians have a long and colourful history of dog breeding that can be observed throughout the region today. The raising of livestock now plays a semi-marginal role in the Andalusian economy, constituting only 15 percent of the primary sector, half the number for Spain taken as a whole. "Extensive" raising of livestock grazes the animals on natural or cultivated pastures, whereas "intensive" raising of livestock is based in fodder rather than pasture. Although the productivity is higher than with extensive techniques, the economics are quite different. While intensive techniques now dominate in Europe and even in other regions of Spain, most of Andalusia's cattle, virtually all of its sheep and goats, and a good portion of its pigs are raised by extensive farming in mountain pastures. This includes the Black Iberian pigs that are the source of Jamón ibérico. Andalusia's native sheep and goats present a great economic opportunity in a Europe where animal products are generally in strong supply, but the sheep and goat meat, milk, and leather (and the products derived from these) are relatively scarce. Dogs are bred not just as companion animals, but also as herding animals used by goat and sheep herders. Hunting remains relatively important in Andalusia, but has largely lost its character as a means of obtaining food. It is now more of a leisure activity linked to the mountain areas and complementary to forestry and the raising of livestock. Dogs are frequently used as hunting companions to retrieve killed game. The Andalusian forests are important for their extent—50 percent of the territory of Andalusia—and for other less quantifiable environmental reasons, such as their value in preventing erosion, regulating the flow of water necessary for other flora and fauna. For these reasons, there is legislation in place to protect the Andalusian forests. The value of forest products as such constitutes only 2 percent of agricultural production. This comes mostly from cultivated species—eucalyptus in Huelva and poplar in Granada—as well as naturally occurring cork oak in the Sierra Morena. Fishing Fishing is a longstanding tradition on the Andalusian coasts. Fish and other seafood have long figured prominently in the local diet and in the local gastronomic culture: fried fish (pescaito frito in local dialect), white prawns, almadraba tuna, among others. The Andalusian fishing fleet is Spain's second largest, after Galicia, and Andalusia's 38 fishing ports are the most of any Spanish autonomous community. Commercial fishing produces only 0.5 percent of the product of the regional primary sector by value, but there are areas where it has far greater importance. In the province of Huelva it constitutes 20 percent of the primary sector, and locally in Punta Umbría 70 percent of the work force is involved in commercial fishing. Failure to comply with fisheries laws regarding the use of trawling, urban pollution of the seacoast, destruction of habitats by coastal construction (for example, alteration of the mouths of rivers, construction of ports), and diminution of fisheries by overexploitation have created a permanent crisis in the Andalusian fisheries, justifying attempts to convert the fishing fleet. The decrease in fish stocks has led to the rise of aquaculture, including fish farming both on the coasts and in the interior. Mining Despite the general poor returns in recent years, mining retains a certain importance in Andalusia. Andalusia produces half of Spain's mining product by value. Of Andalusia's production, roughly half comes from the province of Huelva. Mining for precious metals at Minas de Riotinto in Huelva (see Rio Tinto Group) dates back to pre-Roman times; the mines were abandoned in the Middle Ages and rediscovered in 1556. Other mining activity is coal mining in the Guadiato valley in the province of Córdoba; various metals at Aznalcóllar in the province of Seville, and iron at Alquife in the province of Granada. In addition, limestone, clay, and other materials used in construction are well distributed throughout Andalusia. Secondary sector: industry The Andalusian industrial sector has always been relatively small. Nevertheless, in 2007, Andalusian industry earned 11.979 million euros and employed more than 290,000 workers. This represented 9.15 percent of regional GDP, far below the 15.08 the secondary sector represents in the economy of Spain as a whole. By analyzing the different subsectors of the food industry Andalusian industry accounts for more than 16% of total production. In a comparison with the Spanish economy, this subsector is virtually the only food that has some weight in the national economy with 16.16%. Lies far behind the manufacturing sector of shipping materials just over 10% of the Spanish economy. Companies like Cruzcampo (Heineken Group), Puleva, Domecq, Santana Motors or Renault-Andalusia, are exponents of these two subsectors. Of note is the Andalusian aeronautical sector, which is second nationally only behind Madrid and represents approximately 21% of total turnover in terms of employment, highlighting companies like Airbus, Airbus Military, or the newly formed Aerospace Alestis. On the contrary it is symptomatic of how little weight the regional economy in such important sectors such as textiles or electronics at the national level. Andalusian industry is also characterized by a specialization in industrial activities of transforming raw agricultural and mineral materials. This is largely done by small enterprises without the public or foreign investment more typical of a high level of industrialization. Tertiary sector: services In recent decades the Andalusian tertiary (service) sector has grown greatly, and has come to constitute the majority of the regional economy, as is typical of contemporary economies in developed nations. In 1975 the service sector produced 51.1 percent of local GDP and employed 40.8 percent of the work force. In 2007, this had risen to 67.9 percent of GDP and 66.42 percent of jobs. This process of "tertiarization" of the economy has followed a somewhat unusual course in Andalusia. This growth occurred somewhat earlier than in most developed economies and occurred independently of the local industrial sector. There were two principal reasons that "tertiarization" followed a different course in Andalusia than elsewhere: 1. Andalusian capital found it impossible to compete in the industrial sector against more developed regions, and was obligated to invest in sectors that were easier to enter. 2. The absence of an industrial sector that could absorb displaced agricultural workers and artisans led to the proliferation of services with rather low productivity. This unequal development compared to other regions led to a hypertrophied and unproductive service sector, which has tended to reinforce underdevelopment, because it has not led to large accumulations of capital. Tourism in Andalusia Due in part to the relatively mild winter and spring climate, the south of Spain is attractive to overseas visitors–especially tourists from Northern Europe. While inland areas such as Jaén, Córdoba and the hill villages and towns remain relatively untouched by tourism, the coastal areas of Andalusia have heavy visitor traffic for much of the year. Among the autonomous communities, Andalusia is second only to Catalonia in tourism, with nearly 30 million visitors every year. The principal tourist destinations in Andalusia are the Costa del Sol and (secondarily) the Sierra Nevada. As discussed above, Andalusia is one of the sunniest and warmest places in Europe, making it a center of "sun and sand" tourism, but not only it. Around 70 percent of the lodging capacity and 75 percent of the nights booked in Andalusian hotels are in coastal municipalities. The largest number of tourists come in August—13.26 percent of the nights booked throughout the year—and the smallest number in December—5.36 percent. On the west (Atlantic) coast are the Costa de la Luz (provinces of Huelva and Cádiz), and on the east (Mediterranean) coast, the Costa del Sol (provinces of Cádiz y Málaga), Costa Tropical (Granada and part of Almería) and the Costa de Almería. In 2004, the Blue Flag beach program of the non-profit Foundation for Environmental Education recognized 66 Andalusian beaches and 18 pleasure craft ports as being in a good state of conservation in terms of sustainability, accessibility, and quality. Nonetheless, the level of tourism on the Andalusian coasts has been high enough to have a significant environmental impact, and other organizations—such as the Spanish Ecologists in Action (Ecologistas en Acción) with their description of "Black Flag beaches" or Greenpeace—have expressed the opposite sentiment. Still, Hotel chains such as Fuerte Hotels have ensured that sustainability within the tourism industry is one of their highest priorities. Together with "sand and sun" tourism, there has also been a strong increase in nature tourism in the interior, as well as cultural tourism, sport tourism, and conventions. One example of sport and nature tourism is the ski resort at Sierra Nevada National Park. As for cultural tourism, there are hundreds of cultural tourist destinations: cathedrals, castles, forts, monasteries, and historic city centers and a wide variety of museums. It can be highlighted that Spain has seven of its 42 cultural UNESCO World Heritage Sites in Andalucia: Alhambra, Generalife and Albayzín, Granada (1984,1994) Antequera Dolmens Site (2016) 10th Century Caliphate City of Medina Azahara (2018) Cathedral, Alcázar and Archivo de Indias in Seville (1987) Historic centre of Córdoba (1984,1994) Renaissance Monumental Ensembles of Úbeda and Baeza (2003) Rock Art of the Mediterranean Basin on the Iberian Peninsula (1998) Further, there are the Lugares colombinos, significant places in the life of Christopher Columbus: Palos de la Frontera, La Rábida Monastery, and Moguer) in the province of Huelva. There are also archeological sites of great interest: the Roman city of Italica, birthplace of Emperor Trajan and (most likely) Hadrian or Baelo Claudia near Tarifa. Andalusia was the birthplace of such great painters as Velázquez and Murillo (Seville) and, more recently, Picasso (Málaga); Picasso is memorialized by his native city at the Museo Picasso Málaga and Natal House Foundation; the Casa de Murillo was a house museum 1982–1998, but is now mostly offices for the Andalusian Council of Culture. The CAC Málaga (Museum of Modern Art) is the most visited museum of Andalusia and has offered exhibitions of artists such as Louise Bourgeois, Jake and Dinos Chapman, Gerhard Richter, Anish Kapoor, Ron Mueck or Rodney Graham. Malaga is also located part of the private Carmen Thyssen-Bornemisza Collection at Carmen Thyssen Museum. There are numerous other significant museums around the region, both of paintings and of archeological artifacts such as gold jewelry, pottery and other ceramics, and other works that demonstrate the region's artisanal traditions. The Council of Government has designated the following "Municipios Turísticos": in Almería, Roquetas de Mar; in Cádiz, Chiclana de la Frontera, Chipiona, Conil de la Frontera, Grazalema, Rota, and Tarifa; in Granada, Almuñécar; in Huelva, Aracena; in Jaén, Cazorla; in Málaga, Benalmádena, Fuengirola, Nerja, Rincón de la Victoria, Ronda, and Torremolinos; in Seville, Santiponce. Monuments and features Alcazaba, Almería Cueva de Menga, Antequera (Málaga) El Torcal, Antequera (Málaga) Medina Azahara, Córdoba Mosque–Cathedral, Córdoba Mudejar Quarter, Frigiliana (Málaga) Alhambra, Granada Palace of Charles V, Granada Charterhouse, Granada Albayzín, Granada La Rabida Monastery, Palos de la Frontera (Huelva) Castle of Santa Catalina, Jaén Jaén Cathedral, Jaén Úbeda and Baeza, Jaén Alcazaba, Málaga Buenavista Palace, Málaga Málaga Cathedral, Málaga Puente Nuevo, Ronda (Málaga) Caves of Nerja, Nerja (Málaga) Ronda Bullring, Ronda (Málaga) Giralda, Seville Torre del Oro, Seville Plaza de España, Seville Seville Cathedral, Seville Alcázar of Seville, Seville Unemployment The unemployment rate stood at 25.5% in 2017 and was one of the highest in Spain and Europe. Infrastructure Transport As in any modern society, transport systems are an essential structural element of the functioning of Andalusia. The transportation network facilitates territorial coordination, economic development and distribution, and intercity transportation. In urban transport, underdeveloped public transport systems put pedestrian traffic and other non-motorized traffic are at a disadvantage compared to the use of private vehicles. Several Andalusian capitals—Córdoba, Granada and Seville—have recently been trying to remedy this by strengthening their public transport systems and providing a better infrastructure for the use of bicycles. There are now three rapid transit systems operating in Andalucia – the Seville Metro, Málaga Metro and Granada Metro. Cercanías commuter rail networks operate in Seville, Málaga and Cádiz. For over a century, the conventional rail network has been centralized on the regional capital, Seville, and the national capital, Madrid; in general, there are no direct connections between provincial capitals. High-speed AVE trains run from Madrid via Córdoba to Seville and Málaga, from which a branch from Antequera to Granada opened in 2019. Further AVE routes are under construction. The Madrid-Córdoba-Seville route was the first high-velocity route in Spain (operating since 1992). Other principal routes are the one from Algeciras to Seville and from Almería via Granada to Madrid. Most of the principal roads have been converted into limited access highways known as autovías. The Autovía del Este (Autovía A-4) runs from Madrid through the Despeñaperros Natural Park, then via Bailén, Córdoba, and Seville to Cádiz, and is part of European route E05 in the International E-road network. The other main road in the region is the portion of European route E15, which runs as the Autovia del Mediterráneo along the Spanish Mediterranean coast. Parts of this constitute the superhighway Autopista AP-7, while in other areas it is Autovía A-7. Both of these roads run generally east–west, although the Autovía A-4 turns to the south in western Andalusia. Other first-order roads include the Autovía A-48 roughly along the Atlantic coast from Cádiz to Algeciras, continuing European route E05 to meet up with European route E15; the Autovía del Quinto Centenario (Autovía A-49), which continues west from Seville (where the Autovía A-4 turns toward the south) and goes on to Huelva and into Portugal as European route E01; the Autovía Ruta de la Plata (Autovía A-66), European route E803, which roughly corresponds to the ancient Roman 'Silver Route' from the mines of northern Spain, and runs north from Seville; the Autovía de Málaga (Autovía A-45), which runs south from Córdoba to Málaga; and the Autovía de Sierra Nevada (Autovía A-44), part of European route E902, which runs south from Jaén to the Mediterranean coast at Motril. As of 2008 Andalusia has six public airports, all of which can legally handle international flights. The Málaga Airport is dominant, handling 60.67 percent of passengers and 85 percent of its international traffic. The Seville Airport handles another 20.12 percent of traffic, and the Jerez Airport 7.17 percent, so that these three airports account for 87.96 percent of traffic. Málaga Airport is the international airport that offers a wide variety of international destinations. It has a daily link with twenty cities in Spain and over a hundred cities in Europe (mainly in Great Britain, Central Europe and the Nordic countries but also the main cities of Eastern Europe: Moscow, Saint Petersburg, Sofia, Riga or Bucharest), North Africa, Middle East (Riyadh, Jeddah and Kuwait) and North America (New York, Toronto and Montreal). The main ports are Algeciras (for freight and container traffic) and Málaga for cruise ships. Algeciras is Spain's leading commercial port, with of cargo in 2004. Seville has Spain's only commercial river port. Other significant commercial ports in Andalusia are the ports of the Bay of Cádiz, Almería and Huelva. The Council of Government has approved a Plan of Infrastructures for the Sustainability of Transport in Andalusia (PISTA) 2007–2013, which plans an investment of 30 billion euros during that period. Energy infrastructure The lack of high-quality fossil fuels in Andalusia has led to a strong dependency on petroleum imports. Still, Andalusia has a strong potential for the development of renewable energy, above all wind energy. The Andalusian Energy Agency established in 2005 by the autonomous government, is a new governmental organ charged with the development of energy policy and provision of a sufficient supply of energy for the community. The infrastructure for production of electricity consists of eight large thermal power stations, more than 70 hydroelectric power plants, two wind farms, and 14 major cogeneration facilities. Historically, the largest Andalusian business in this sector was the Compañía Sevillana de Electricidad, founded in 1894, absorbed into Endesa in 1996. The Solar power tower PS10 was built by the Andalusian firm Abengoa in Sanlúcar la Mayor in the province of Seville, and began operating in March 2007. It is the largest existing solar power facility in Europe. Smaller solar power stations, also recent, exist at Cúllar and Galera, Granada, inaugurated by Geosol and Caja Granada. Two more large thermosolar facilities, Andasol I y II, planned at Hoya de Guadix in the province of Granada are expected to supply electricity to half a million households. The Plataforma Solar de Almería (PSA) in the Tabernas Desert is an important center for the exploration of the solar energy. The largest wind power firm in the region is the Sociedad Eólica de Andalucía, formed by the merger of Planta Eólica del Sur S.A. and Energía Eólica del Estrecho S.A. The Medgaz gas pipeline directly connects the Algerian town of Béni Saf to Almería. Education As throughout Spain, basic education in Andalusia is free and compulsory. Students are required to complete ten years of schooling, and may not leave school before the age of 16, after which students may continue on to a baccalaureate, to intermediate vocational education, to intermediate-level schooling in arts and design, to intermediate sports studies, or to the working world. Andalusia has a tradition of higher education dating back to the Modern Age and the University of Granada, University of Baeza, and University of Osuna. there were ten private or public universities in Andalusia. University studies are structured in cycles, awarding degrees based on ECTS credits in accord with the Bologna process, which the Andalusian universities are adopting in accord with the other universities of the European Higher Education Area. Healthcare Responsibility for healthcare jurisdictions devolved from the Spanish government to Andalusia with the enactment of the Statute of Autonomy. Thus, the Andalusian Health Service (Servicio Andaluz de Salud) currently manages almost all public health resources of the Community, with such exceptions as health resources for prisoners and members of the military, which remain under central administration. Science and technology According to the Outreach Program for Science in Andalusia, Andalusia contributes 14 percent of Spain's scientific production behind only Madrid and Catalonia among the autonomous communities, even though regional investment in research and development (R&D) as a proportion of GDP is below the national average. The lack of research capacity in business and the low participation of the private sector in research has resulted in R&D taking place largely in the public sector. The Council of Innovation, Science and Business is the organ of the autonomous government responsible for universities, research, technological development, industry, and energy. The council coordinates and initiates scientific and technical innovation through specialized centers an initiatives such as the Andalusian Center for Marine Science and Technology (Centro Andaluz de Ciencia y Tecnología Marina) and Technological Corporation of Andalusia (Corporación Tecnológica de Andalucía). Within the private sphere, although also promoted by public administration, technology parks have been established throughout the Community, such as the Technological Park of Andalucia (Parque Tecnológico de Andalucía) in Campanillas on the outskirts of Málaga, and Cartuja 93 in Seville. Some of these parks specialize in specific sector, such as in aerospace or in food technology. The Andalusian government deployed 600,000 Ubuntu desktop computers in their schools. Media Andalusia has international, national, regional, and local media organizations, which are active gathering and disseminating information (as well as creating and disseminating entertainment). The most notable is the public Radio y Televisión de Andalucía (RTVA), broadcasting on two regional television channels, Canal Sur and Canal Sur 2, four regional radio stations, Canal Sur Radio, Canal Fiesta Radio, Radio Andalucía Información and Canal Flamenco Radio, as well as various digital signals, most notably Canal Sur Andalucía available on cable TV throughout Spain. Newspapers Different newspapers are published for each Andalusian provincial capital, comarca, or important city. Often, the same newspaper organization publishes different local editions with much shared content, with different mastheads and different local coverage. There are also popular papers distributed without charge, again typically with local editions that share much of their content. No single Andalusian newspaper is distributed throughout the region, not even with local editions. In eastern Andalusia the has editions tailored for the provinces of Almería, Granada, and Jaén. Grupo Joly is based in Andalucia, backed by Andalusian capital, and publishes eight daily newspapers there. Efforts to create a newspaper for the entire autonomous region have not succeeded (the most recent as of 2009 was the Diario de Andalucía). The national press (, El Mundo, ABC, etc.) include sections or editions specific to Andalusia. Public television Andalusia has two public television stations, both operated by Radio y Televisión de Andalucía (RTVA): Canal Sur first broadcast on 28 February 1989 (Andalusia Day). Canal Sur 2 first broadcast 5 June 1998. Programming focuses on culture, sports, and programs for children and youth. In addition, RTVA also operates the national and international cable channel Canal Sur Andalucía, which first broadcast in 1996 as Andalucía Televisión. Radio There are four public radio stations in the region, all operated by RTVA: , first broadcast October 1988. , first broadcast September 1998. , first broadcast January 2001. , first broadcast 29 September 2008. Art and culture The patrimony of Andalusia has been shaped by its particular history and geography, as well as its complex flows of population. Andalusia has been home to a succession of peoples and civilizations, many very different from one another, each impacting the settled inhabitants. The ancient Iberians were followed by Celts, Phoenicians and other Eastern Mediterranean traders, Romans, migrating Germanic tribes, Arabs or Berbers. All have shaped the Spanish patrimony in Andalusia, which was already diffused widely in the literary and pictorial genre of the costumbrismo andaluz. In the 19th century, Andalusian culture came to be widely viewed as the Spanish culture par excellence, in part thanks to the perceptions of romantic travellers. In the words of Ortega y Gasset: Arts Andalusia has been the birthplace of many great artists: the classic painters Velázquez, Murillo, and Juan de Valdés Leal; the sculptors Juan Martínez Montañés, Alonso Cano and Pedro de Mena; and such modern painters as Daniel Vázquez Díaz and Pablo Picasso. The Spanish composer Manuel de Falla was from Cádiz and incorporated typical Andalusian melodies in his works, as did Joaquín Turina, from Seville. The great singer Camarón de la Isla was born in San Fernando, Cádiz, and Andrés Segovia who helped shape the romantic-modernist approach to classical guitar, was born in Linares, Jaén. The virtuoso Flamenco guitar player Paco de Lucia who helped internationalize Flamenco, was born in Algeciras, Cadiz. Architecture Since the Neolithic era, Andalusia has preserved important megaliths, such as the dolmens at the Cueva de Menga and the Dolmen de Viera, both at Antequera. Archeologists have found Bronze Age cities at Los Millares and El Argar. Archeological digs at Doña Blanca in El Puerto de Santa María have revealed the oldest Phoenicians city in the Iberian peninsula; major ruins have also been revealed at Roman Italica near Seville. Some of the greatest architecture in Andalusia was developed across several centuries and civilizations, and the region is particularly famous for its Islamic and Moorish architecture, which includes the Alhambra complex, Generalife and the Mosque-Cathedral of Córdoba. The traditional architecture of Andalusia retains its Roman with Arab influences brought by Muslims, with a marked Mediterranean character strongly conditioned by the climate. Traditional urban houses are constructed with shared walls to minimize exposure to high exterior temperatures. Solid exterior walls are painted with lime to minimize the heating effects of the sun. In accord with the climate and tradition of each area, the roofs may be terraces or tiled in the Roman imbrex and tegula style. One of the most characteristic elements (and one of the most obviously influenced by Roman architecture) is the interior patio or courtyard; the patios of Córdoba are particularly famous. Other characteristic elements are decorative (and functional) wrought iron gratings and the tiles known as azulejos. Landscaping—both for common private homes and homes on a more lavish scale—also carries on older traditions, with plants, flowers, and fountains, pools, and streams of water. Beyond these general elements, there are also specific local architectural styles, such as the flat roofs, roofed chimneys, and radically extended balconies of the Alpujarra, the cave dwellings of Guadix and of Granada's Sacromonte, or the traditional architecture of the Marquisate of Zenete. The monumental architecture of the centuries immediately after the Reconquista often displayed an assertion of Christian hegemony through architecture that referenced non-Arab influences. Some of the greatest Renaissance buildings in Andalusia are from the then-kingdom of Jaén: the Jaén Cathedral, designed in part by Andrés de Vandelvira, served as a model for the Cathedral of Malaga and Guadix; the centers of Úbeda and Baeza, dating largely from this era, are UNESCO World Heritage Sites. Seville and its kingdom also figured prominently in this era, as is shown by the Casa consistorial de Sevilla, the Hospital de las Cinco Llagas, or the Charterhouse of Jerez de la Frontera. The Palace of Charles V in Granada is uniquely important for its Italianate purism. Andalusia also has such Baroque-era buildings as the Palace of San Telmo in Seville (seat of the current autonomic presidency), the Church of Our Lady of Reposo in Campillos, and the Granada Charterhouse. Academicism gave the region the Royal Tobacco Factory in Seville and Neoclassicism the nucleus of Cádiz, such as its , Royal Prison, and the Oratorio de la Santa Cueva. Revivalist architecture in the 19th and 20th centuries contributed the buildings of the Ibero-American Exposition of 1929 in Seville, including the Neo-Mudéjar Plaza de España. Andalusia also preserves an important industrial patrimony related to various economic activities. Besides the architecture of the cities, there is also much outstanding rural architecture: houses, as well as ranch and farm buildings and dog houses. Sculpture The Iberian reliefs of Osuna, Lady of Baza, and , the Phoenician sarcophagi of Cádiz, and the Roman sculptures of the Baetic cities such as Italica give evidence of traditions of sculpture in Andalusia dating back to antiquity. There are few significant surviving sculptures from the time of al-Andalus; two notable exceptions are the lions of the Alhambra and of the Maristán of Granada (the Nasrid hospital in the Albaicín). The Sevillian school of sculpture dating from the 13th century onward and the Granadan school beginning toward the end of the 16th century both focused primarily on Christian religious subject matter, including many wooden altarpieces. Notable sculptors in these traditions include Lorenzo Mercadante de Bretaña, , Juan Martínez Montañés, Pedro Roldán, , Jerónimo Balbás, Alonso Cano, and Pedro de Mena. Non-religious sculpture has also existed in Andalusia since antiquity. A fine example from the Renaissance era is the decoration of the Casa de Pilatos in Seville. Nonetheless, non-religious sculpture played a relatively minor role until such 19th-century sculptors as . Painting As in sculpture, there were and the schools of painting. The former has figured prominently in the history of Spanish art since the 15th century and includes such important artists as Zurbarán, Velázquez and Murillo, as well as art theorists such as Francisco Pacheco. The Museum of Fine Arts of Seville and the Prado contain numerous representative works of the Sevillian school of painting. A specific romantic genre known as costumbrismo andaluz depicts traditional and folkloric Andalusian subjects, such as bullfighting scenes, dogs, and scenes from Andalusia's history. Important artists in this genre include Manuel Barrón, José García Ramos, Gonzalo Bilbao and Julio Romero de Torres. The genre is well represented in the private Carmen Thyssen-Bornemisza Collection, part of which is on display at Madrid's Thyssen-Bornemisza Museum and Carmen Thyssen Museum in Málaga. Málaga also has been and is an important artistic center. Its most illustrious representative was Pablo Picasso, one of the most influential artists of the 20th century. The city has a Museum and Natal House Foundation, dedicated to the painter. Literature and philosophy Andalusia plays a significant role in the history of Spanish-language literature, although not all of the important literature associated with Andalusia was written in Spanish. Before 1492, there was the literature written in Andalusian Arabic. Hispano-Arabic authors native to the region include Ibn Hazm, Ibn Zaydún, Ibn Tufail, Al-Mu'tamid, Ibn al-Khatib, Ibn al-Yayyab, and Ibn Zamrak or Andalusian Hebrew poets as Solomon ibn Gabirol. Ibn Quzman, of the 12th century, crafted poems in the colloquial Andalusian language. In 1492 Antonio de Nebrija published his celebrated Gramática de la lengua castellana ("Grammar of the Castilian language"), the first such work for a modern European language. In 1528 Francisco Delicado wrote La lozana andaluza, a novel in the orbit of La Celestina, and in 1599 the Sevillian Mateo Alemán wrote the first part of Guzmán de Alfarache, the first picaresque novel with a known author. The prominent humanist literary school of Seville included such writers as Juan de Mal Lara, Fernando de Herrera, Gutierre de Cetina, Luis Barahona de Soto, Juan de la Cueva, Gonzalo Argote de Molina, and Rodrigo Caro. The Córdoban Luis de Góngora was the greatest exponent of the culteranismo of Baroque poetry in the Siglo de Oro; indeed, the style is often referred to as Góngorismo. Literary Romanticism in Spain had one of its great centers in Andalusia, with such authors as Ángel de Saavedra, 3rd Duke of Rivas, José Cadalso and Gustavo Adolfo Bécquer. Costumbrismo andaluz existed in literature as much as in visual art, with notable examples being the Escenas andaluzas of Serafín Estébanez Calderón and the works of Pedro Antonio de Alarcón. Andalusian authors Ángel Ganivet, Manuel Gómez-Moreno, Manuel and Antonio Machado, and Francisco Villaespesa are all generally counted in the Generation of '98. Also of this generation were the Quintero brothers, dramatists who faithfully captured Andalusian dialects and idiosyncrasies. Also of note, 1956 Nobel Prize-winning poet Juan Ramón Jiménez was a native of Moguer, near Huelva. A large portion of the avant-garde Generation of '27 who gathered at the Ateneo de Sevilla on the 300th anniversary of Góngora's death were Andalusians: Federico García Lorca, Luis Cernuda, Rafael Alberti, Manuel Altolaguirre, Emilio Prados, and 1977 Nobel laureate Vicente Aleixandre. Certain Andalusian fictional characters have become universal archetypes: Prosper Mérimée's gypsy Carmen, P. D. Eastman's Perro, Pierre Beaumarchais's Fígaro, and Tirso de Molina's Don Juan. As in most regions of Spain, the principal form of popular verse is the romance, although there are also strophes specific to Andalusia, such as the soleá or the . Ballads, lullabies, street vendor's cries, nursery rhymes, and work songs are plentiful. Among the philosophers native to the region can be counted Seneca, Avicebron, Maimonides, Averroes, Fernán Pérez de Oliva, Sebastián Fox Morcillo, Ángel Ganivet, Francisco Giner de los Ríos and María Zambrano. Music of Andalusia The music of Andalusia includes traditional and contemporary music, folk and composed music, and ranges from flamenco to rock. Conversely, certain metric, melodic and harmonic characteristics are considered Andalusian even when written or performed by musicians from elsewhere. Flamenco, perhaps the most characteristically Andalusian genre of music and dance, originated in the 18th century, but is based in earlier forms from the region. The influence of the traditional music and dance of the Romani people or Gypsies is particularly clear. The genre embraces distinct vocal (cante flamenco), guitar (toque flamenco), and dance (baile flamenco) styles. The Andalusian Statute of Autonomy reflects the cultural importance of flamenco in its Articles 37.1.18 and 68: Fundamental in the history of Andalusian music are the composers Cristóbal de Morales, Francisco Guerrero, Francisco Correa de Arauxo, Manuel García, Manuel de Falla, Joaquín Turina, and , as well as one of the fathers of modern classical guitar, the guitarist Andrés Segovia. Mention should also be made of the great folk artists of the copla (music) and the cante hondo, such as Rocío Jurado, Lola Flores (La Faraona, "the pharaoh"), Juanito Valderrama and the revolutionary Camarón de la Isla. Prominent Andalusian rock groups include Triana and Medina Azahara. The duo Los del Río from Dos Hermanas had international success with their "Macarena", including playing at a Super Bowl half-time show in the United States, where their song has also been used as campaign music by the Democratic Party. Other notables include the singer, songwriter, and poet Joaquín Sabina, Isabel Pantoja, Rosa López, who represented Spain at Eurovision in 2002, and David Bisbal. On November 16, 2023, Seville will host the 24th Annual Latin Grammy Awards at the FIBES Conference and Exhibition Centre, making Seville the first city outside of the United States to host the Latin Grammy Awards. Film The portrayal of Andalusia in film is often reduced to archetypes: flamenco, bullfighting, Catholic pageantry, brigands, the property-rich and cash-poor señorito andaluz and emigrants. These images particularly predominated from the 1920s through the 1960s, and helped to consolidate a clichéd image of the region. In a very different vein, the province of Almería was the filming location for many Westerns, especially (but by no means exclusively) the Italian-directed Spaghetti Westerns. During the dictatorship of Francisco Franco, this was the extent of the film industry in Andalusia. Nonetheless, Andalusian film has roots as far back as José Val del Omar in the pre-Franco years, and since the Spanish transition to democracy has brought forth numerous nationally and internationally respected directors: (Heart of the Earth), Chus Gutiérrez (Poniente), (Carlos Against the World), Alberto Rodríguez (7 Virgins), Benito Zambrano (Solas), and Antonio Banderas (Summer Rain). Counting together feature films, documentaries, television programs, music videos etc., Andalusia has boomed from 37 projects shooting in 1999 to 1,054 in 2007, with the figure for 2007 including 19 feature films. Although feature films are the most prestigious, commercials and television are currently more economically important to the region. The , headquartered in Córdoba, is a government-run entity in charge of the investigation, collection and diffusion of Andalusian cinematic heritage. Other important contributors to this last activity are such annual film festivals as the Málaga Spanish Film Festival, the most important festival dedicated exclusively to cinema made in Spain, the Seville European Film Festival (SEFF), the International Festival of Short Films—Almería in Short, the Huelva Festival of Latin American Film, the Atlantic Film Show in Cádiz, the Islantilla Festival of Film and Television and the African Film Festival of Tarifa. Culture Customs and society Each sub-region in Andalusia has its own unique customs that represent a fusion of Catholicism and local folklore. Cities like Almería have been influenced historically by both Granada and Murcia in the use of traditional head coverings. The sombrero de Labrador, a worker's hat made of black velvet, is a signature style of the region. In Cádiz, traditional costumes with rural origins are worn at bullfights and at parties on the large estates. The tablao flamenco dance and the accompanying cante jondo vocal style originated in Andalusia and traditionally most often performed by the gypsy (Gitanos). One of the most distinctive cultural events in Andalusia is the Romería de El Rocío in May. It consists of a pilgrimage to the Hermitage of El Rocío in the countryside near Almonte, in honor of the Virgin of El Rocío, an image of the Virgin and Child. In recent times the Romería has attracted roughly a million pilgrims each year. In Jaén, the saeta is a revered form of Spanish religious song, whose form and style has evolved over many centuries. Saetas evoke strong emotion and are sung most often during public processions. Verdiales, based upon the fandango, are a flamenco music style and song form originating in Almogia, near Málaga. For this reason, the Verdiales are sometimes known as Fandangos de Málaga. The region also has a rich musical tradition of flamenco songs, or palos called cartageneras. Seville celebrates Semana Santa, one of the better known religious events within Spain. During the festival, religious fraternities dress as penitents and carry large floats of lifelike wooden sculptures representing scenes of the Passion, and images of the Virgin Mary. Sevillanas, a type of old folk music sung and written in Seville and still very popular, are performed in fairs and festivals, along with an associated dance for the music, the Baile por sevillanas. All the different regions of Andalusia have developed their own distinctive customs, but all share a connectedness to Catholicism as developed during baroque Spain society. Andalusian Spanish Andalusian Spanish is one of the most widely spoken forms of Spanish in Spain, and because of emigration patterns was very influential on American Spanish. Rather than a single dialect, it is really a range of dialects sharing some common features; among these is the retention of more Arabic words than elsewhere in Spain, as well as some phonological differences compared with Standard Spanish. The isoglosses that mark the borders of Andalusian Spanish overlap to form a network of divergent boundaries, so there is no clear border for the linguistic region. A fringe movement promoting an Andalusian language independent from Spanish exists. Religion The territory now known as Andalusia fell within the sphere of influence of ancient Mediterranean mythological beliefs. Phoenician colonization brought the cults of Baal and Melqart; the latter lasted into Roman times as Hercules, mythical founder of both Cádiz and Seville. The Islote de Sancti Petri held the supposed tomb of Hercules, with representations of his Twelve labors; the region was the traditional site of the tenth labor, obtaining the cattle of the monster Geryon. Traditionally, the Pillars of Hercules flank the Strait of Gibraltar. Clearly, the European pillar is the Rock of Gibraltar; the African pillar was presumably either Monte Hacho in Ceuta or Jebel Musa in Morocco. The Roman road that led from Cádiz to Rome was known by several names, one of them being , Hercules route returning from his tenth labor. The present coat of arms of Andalusia shows Hercules between two lions, with two pillars behind these figures. Roman Catholicism is, by far, the largest religion in Andalusia. In 2012, the proportion of Andalusians that identify themselves as Roman Catholic was 78.8%. Spanish Catholic religion constitute a traditional vehicle of Andalusian cultural cohesion, and the principal characteristic of the local popular form of Catholicism is devotion to the Virgin Mary; Andalusia is sometimes known as la tierra de María Santísima ("the land of Most Holy Mary"). Also characteristic are the processions during Holy Week, in which thousands of penitents (known as nazarenos) sing saetas. Andalusia is the site of such pilgrim destinations as the in Andújar and the Hermitage of El Rocío in Almonte. Bullfighting While some trace the lineage of the Spanish Fighting Bull back to Roman times, today's fighting bulls in the Iberian peninsula and in the former Spanish Empire trace back to Andalusia in the 15th and 16th centuries. Andalusia remains a center of bull-rearing and bullfighting: its 227 fincas de ganado where fighting bulls are raised cover . In the year 2000, Andalusia's roughly 100 bullrings hosted 1,139 corridas. The oldest bullring still in use in Spain is the neoclassical Plaza de toros in Ronda, built in 1784. The Andalusian Autonomous Government sponsors the Rutas de Andalucía taurina, a touristic route through the region centered on bullfighting. Festivals The Andalusian festivals provide a showcase for popular arts and traditional costume. Among the most famous of these are the Seville Fair or Feria de Abril in Seville, now echoed by smaller fairs in Madrid and Barcelona, both of which have many Andalusian immigrants; the Feria de Agosto in Málaga; the Feria de Jerez or Feria del Caballo in Jerez; the in Granada; the in Córdoba; the Columbian Festivals (Fiestas Colombinas) in Huelva; the Feria de la Virgen del Mar in Almería; and the in Jaén, among many others. Festivals of a religious nature are a deep Andalusian tradition and are met with great popular fervor. There are numerous major festivals during Holy Week. An annual pilgrimage brings a million visitors to the Hermitage of El Rocío in Almonte (population 16,914 in 2008); similarly large crowds visit the Santuario de Nuestra Señora de la Cabeza in Andújar every April. Other important festivals are the Carnival of Cádiz and the Fiesta de las Cruces or Cruz de mayo in Granada and Córdoba; in Córdoba this is combined with a competition for among the patios (courtyards) of the city. Andalusia hosts an annual festival for the dance of flamenco in the summer-time. Cuisine The Andalusian diet varies, especially between the coast and the interior, but in general is a Mediterranean diet based on olive oil, cereals, legumes, vegetables, fish, dried fruits and nuts, and meat; there is also a great tradition of drinking wine. Fried fish—pescaíto frito—and seafood are common on the coast and also eaten well into the interior under coastal influence. Atlantic bluefin tuna (Thunnus thynnus) from the Almadraba areas of the Gulf of Cádiz, prawns from Sanlúcar de Barrameda (known as langostino de Sanlúcar), and deepwater rose shrimp () from Huelva are all highly prized. Fishing for the transparent goby or chanquete (Aphia minuta), a once-popular small fish from Málaga, is now banned because the techniques used to catch them trap too many immature fish of other species. The mountainous regions of the Sierra Morena and Sierra Nevada produce cured hams, notably including jamón serrano and jamón ibérico. These come from two different types of pig, (jamón serrano from white pigs, the more expensive jamón ibérico from the Black Iberian pig). There are several denominaciones de origen, each with its own specifications including in just which microclimate region ham of a particular denomination must be cured. Plato alpujarreño is another mountain specialty, a dish combining ham, sausage, sometimes other pork, egg, potatoes, and olive oil. Confectionery is popular in Andalusia. Almonds and honey are common ingredients. Many enclosed convents of nuns make and sell pastries, especially Christmas pastries: mantecados, polvorones, pestiños, alfajores, , as well as churros or , meringue cookies (merengadas), and . Cereal-based dishes include migas de harina in eastern Andalusia (a similar dish to couscous rather than the fried breadcrumb based migas elsewhere in Spain) and a sweeter, more aromatic porridge called poleá in western Andalusia. Vegetables form the basis of such dishes as (similar to ratatouille) and the chopped salad known as or . Hot and cold soups based in olive oil, garlic, bread, tomato and peppers include gazpacho, salmorejo, porra antequerana, ajo caliente, sopa campera, or—using almonds instead of tomato—ajoblanco. Wine has a privileged place at the Andalusian table. Andalusian wines are known worldwide, especially fortified wines such as sherry (jerez), aged in soleras. These are enormously varied; for example, dry sherry may be the very distinct fino, manzanilla, amontillado, oloroso, or Palo Cortado and each of these varieties can each be sweetened with Pedro Ximénez or Moscatel to produce a different variety of sweet sherry. Besides sherry, Andalucía has five other denominaciones de origen for wine: D.O. Condado de Huelva, D.O. Manzanilla-Sanlúcar de Barrameda, D.O. Málaga, D.O. Montilla-Moriles, and D.O. Sierras de Málaga. Most Andalusian wine comes from one of these regions, but there are other historic wines without a Protected Geographical Status, for example Tintilla de Rota, Pajarete, Moscatel de Chipiona and Mosto de Umbrete. Andalusia also produces D.O. vinegar and brandy: D.O. Vinagre de Jerez and D.O. Brandy de Jerez. Other traditions The traditional dress of 18th-century Andalusia was strongly influenced by within the context of casticismo (purism, traditionalism, authenticity). The archetype of the majo and maja was that of a bold, pure Spaniard from a lower-class background, somewhat flamboyant in his or her style of dress. This emulation of lower-class dress also extended to imitating the clothes of brigands and Romani ("Gypsy") women. The Museum of Arts and Traditions of Sevilla has collected representative samples of a great deal of the history of Andalusian dress, including examples of such notable types of hat as the sombrero cordobés, sombrero calañés, sombrero de catite and the , as well as the traje corto and traje de flamenca. Andalusia has a great artisan tradition in tile, leather (see Shell cordovan), weaving (especially of the heavy jarapa cloth), marquetry, and ceramics (especially in Jaén, Granada, and Almería), lace (especially Granada and Huelva), embroidery (in Andévalo), ironwork, woodworking, and basketry in wicker, many of these traditions a heritage of the long period of Muslim rule. Andalusia is also known for its dogs, particularly the Andalusian Hound, which was originally bred in the region. Dogs, not just andalusian hounds, are very popular in the region. Andalusian equestrianism, institutionalized in the Royal Andalusian School of Equestrian Art is known well beyond the borders of Spain. The Andalusian horse is strongly built, compact yet elegant, distinguished in the area of dressage and show jumping, and is also an excellent horse for driving. They are known for their elegant "dancing" gait. Sports Team sports In Andalusia, as throughout Spain, football is the predominant sport. Introduced to Spain by British men who worked in mining for Rio Tinto in the province of Huelva, the sport soon became popular with the local population. As Spain's oldest existing football club, Recreativo de Huelva, founded 1889, is known as El Decano ("the Dean"). For the 2023–24 season, five Andalusian clubs compete in Spain's First Division La Liga: Cádiz CF, Real Betis, Sevilla FC, Granada CF and UD Almería. Betis won La Liga in 1934–35 and Sevilla in the 1945–46 season. The other Andalusian teams, Málaga CF play in the Segunda División, Córdoba CF play in the Primera Federación, whilst Recreativo de Huelva, participate in the Segunda Federación, and Marbella FC and Real Jaén participate in the Tercera División. The Andalusia autonomous football team is not in any league, and plays only friendly matches. In recent years, they have played mostly during the Christmas break of the football leagues. They play mostly against national teams from other countries, but would not be eligible for international league play, where Spain is represented by a single national team. In recent decades, basketball has become increasingly popular, with CB Málaga, also known as Unicaja Málaga who have won the Liga ACB in 2007 and the Korać Cup in 2001 and usually play the Euroleague, CB Sevilla (Banca Cívica) and CB Granada competing at the top level in the Liga ACB. Unlike basketball, handball has never really taken off in Andalusia. There is one Andalusian team in the Liga Asobal, Spain's premier handball league: BM Puente Genil, playing in the province of Córdoba. Andalusia's strongest showing in sports has been in table tennis. There are two professional teams: Cajasur Priego TM and Caja Granada TM, the latter being Spain's leading table tennis team, with more than 20 league championships in nearly consecutive years and 14 consecutive Copas del Rey, dominating the Liga ENEBÉ. Cajasur is also one of the league's leading teams. Olympics 220 Andalusian athletes have competed in a total of 16 summer or winter Olympic Games. The first was Leopoldo Sainz de la Maza, part of the silver medal-winning polo team at the 1920 Summer Olympics in Antwerp, Belgium. In all, Andalusians have won six gold medals, 11 silver, and two bronze. Winners of multiple medals include the Córdoban boxer Rafael Lozano (bronze in the 1996 Summer Olympics at Atlanta, Georgia, US, and silver in the 2000 Summer Olympics in Sydney, Australia); sailor Theresa Zabell, Malagueña by adoption (gold medals at Barcelona in 1992 and Atlanta in 1996). Other notable winners have been Granadan tennis player Manuel Orantes (silver in the men's singles of the demonstration tournament in Mexico City in 1968), Jerezano riders Ignacio Rambla and Rafael Soto (silver in dressage in Athens in 2004) and the racewalker Paquillo Fernández from Guadix (silver in Athens in 2004). The largest number of Olympic appearances were by the Malagueña swimmer María Peláez (five appearances), the Granadan skier María José Rienda (four), the Sevillian rider Luis Astolfi (four), and the Sevillian rower Fernando Climent (four, including a silver at Los Angeles, California, US, in 1984. Seville has been a pre-candidate to host the Summer Olympics in two occasions, 2004 and 2008, and Granada has been a pre-candidate to host the winter Olympics; neither has ever succeeded in its candidature. The ski resort of Sierra Nevada, near Granada, has however hosted the 1996 Alpine World Ski Championships, and Granada hosted the 2015 Winter Universiade. Other sports Other sporting events in Andalusia include surfing, kitesurfing and windsurfing competitions at Tarifa, various golf tournaments at courses along the coast, and horse racing and polo at several locations in the interior. Andalusia hosted the 1999 World Championships in Athletics (Seville), the 2005 Mediterranean Games (Almería) and the FIS Alpine World Ski Championships 1996 (Granada), among other major events. There is also the annual Vuelta a Andalucía bicycle road race and the Linares chess tournament. The Circuito de Jerez, located near Jerez de la Frontera, hosts the Spanish motorcycle Grand Prix. Twinning and covenants Andalusia has had a sister region relationship with Buenos Aires (Argentina), since 2001; and with Córdoba (Argentina). Also Andalusia has a collaboration agreement with Guerrero (Mexico). See also Andalusian people Andalusian nationalism Azulejo List of Andalusians List of the oldest mosques Roman Bética Route San Juan De Los Terreros White Towns of Andalusia Yeseria References External links Official site – Junta de Andalucia Andalucia Tourism Site Andalucia page at the guardian Autonomous communities of Spain NUTS 2 statistical regions of the European Union States and territories established in 1981 States and territories established in 2007
2745
https://en.wikipedia.org/wiki/Azad%20Kashmir
Azad Kashmir
Azad Jammu and Kashmir (; , ), abbreviated as AJK and colloquially referred to as simply Azad Kashmir, is a region administered by Pakistan as a nominally self-governing entity and constituting the western portion of the larger Kashmir region, which has been the subject of a dispute between India and Pakistan since 1947. Azad Kashmir also shares borders with the Pakistani provinces of Punjab and Khyber Pakhtunkhwa to the south and west, respectively. On its eastern side, Azad Kashmir is separated from the Indian union territory of Jammu and Kashmir (part of Indian-administered Kashmir) by the Line of Control (LoC), which serves as the de facto border between the Indian- and Pakistani-controlled parts of Kashmir. Geographically, it covers a total area of and has a total population of 4,045,366 as per the 2017 national census. The territory has a parliamentary form of government modelled after the British Westminster system, with the city of Muzaffarabad serving as its capital. The President of AJK is the constitutional head of state, while the Prime Minister, supported by a Council of Ministers, is the chief executive. The unicameral Azad Kashmir Legislative Assembly elects both the Prime Minister and President. The territory has its own Supreme Court and a High Court, while the Government of Pakistan's Ministry of Kashmir Affairs and Gilgit-Baltistan serves as a link between itself and Azad Jammu and Kashmir's government, although the autonomous territory is not represented in the Parliament of Pakistan. Northern Azad Kashmir lies in a region that experiences strong vibrations of the earth as a result of the Indian plate underthrusting the Eurasian plate. A major earthquake in 2005 killed at least 100,000 people and left another three million people displaced, causing widespread devastation to the region's infrastructure and economy. Since then, with help from the Government of Pakistan and foreign aid, reconstruction of infrastructure is underway. Azad Kashmir's economy largely depends on agriculture, services, tourism, and remittances sent by members of the British Mirpuri community. Nearly 87% of Azad Kashmiri households own farm property, and the region has the highest rate of school enrollment in Pakistan and a literacy rate of approximately 74%. Name Azad Kashmir (Free Kashmir) was the title of a pamphlet issued by the Muslim Conference party at its 13th general session held in 1945 at Poonch. It is believed to have been a response to the National Conference's Naya Kashmir (New Kashmir) programme. Sources state that it was no more than a compilation of various resolutions passed by the party. But its intent seems to have been to declare that the Muslims of Jammu and Kashmir were committed to the Muslim League's struggle for a separate homeland (Pakistan), and that the Muslim Conference was the sole representative organisation of the Muslims of Kashmir. However, the following year, the party passed an "Azad Kashmir resolution" demanding that the maharaja institute a constituent assembly elected on an extended franchise. According to scholar Chitralekha Zutshi, the organisation's declared goal was to achieve responsible government under the aegis of the maharaja without association with either India or Pakistan. The following year, the party workers assembled at the house of Sardar Ibrahim on 19 July 1947 reversed the decision, demanding that the maharaja accede to Pakistan. Soon afterward, Sardar Ibrahim escaped to Pakistan and led the Poonch rebellion from there, with the assistance of Pakistan's prime minister Liaquat Ali Khan and other officials. Liaquat Ali Khan appointed a committee headed by Mian Iftikharuddin to draft a "declaration of freedom". On 4 October an Azad Kashmir provisional government was declared in Lahore with Ghulam Nabi Gilkar as president under the assumed name "Mr. Anwar" and Sardar Ibrahim as the prime minister. Gilkar travelled to Srinagar and was arrested by the maharaja's government. Pakistani officials subsequently appointed Sardar Ibrahim as the president of the provisional government. Geography The northern part of Azad Jammu and Kashmir encompasses the lower area of the Himalayas, including Jamgarh Peak (). However, Sarwali Peak (6326 m) in Neelum Valley is the highest peak in the state. The region receives rainfall in both the winter and the summer. Muzaffarabad and Pattan are among the wettest areas of Pakistan. Throughout most of the region, the average rainfall exceeds 1400 mm, with the highest average rainfall occurring near Muzaffarabad (around 1800 mm). During the summer season, monsoon floods of the rivers Jhelum and Leepa are common due to extreme rains and snow melting. Climate The southern parts of Azad Kashmir, including the Bhimber, Mirpur, and Kotli districts, have extremely hot weather in the summer and moderate cold weather in the winter. They receive rain mostly in monsoon weather. In the central and northern parts of the state, the weather remains moderately hot in the summer and cold and chilly in the winter. Snowfall also occurs there in December and January. The region receives rainfall in both the winter and the summer. Muzaffarabad and Pattan are among the wettest areas of the state, but they don't receive snow. Throughout most of the region, the average rainfall exceeds 1400 mm, with the highest average rainfall occurring near Muzaffarabad (around 1800 mm). During summer, monsoon floods of the Jhelum and Leepa rivers are common, due to high rainfall and melting snow. History At the time of the Partition of India in 1947, the British abandoned their suzerainty over the princely states, which were left with the options of joining India or Pakistan or remaining independent. Hari Singh, the maharaja of Jammu and Kashmir, wanted his state to remain independent. Muslims in the western districts of the Jammu province (current day Azad Kashmir) and in the Frontier Districts province (current day Gilgit-Baltistan) had wanted to join Pakistan. In Spring 1947, an uprising against the maharaja broke out in Poonch, an area bordering the Rawalpindi division of West Punjab. The maharaja's administration is said to have started levying punitive taxes on the peasantry which provoked a local revolt and the administration resorted to brutal suppression. The area's population, swelled by recently demobilised soldiers following World War II, rebelled against the maharaja's forces and gained control of almost the entire district. Following this victory, the pro-Pakistan chieftains of the western districts of Muzaffarabad, Poonch and Mirpur proclaimed a provisional Azad Jammu and Kashmir government in Rawalpindi on October 3, 1947. Ghulam Nabi Gilkar, under the assumed name "Mr. Anwar," issued a proclamation in the name of the provisional government in Muzaffarabad. However, this government quickly fizzled out with the arrest of Anwar in Srinagar. On October 24, a second provisional government of Azad Kashmir was established at Palandri under the leadership of Sardar Ibrahim Khan. On October 21, several thousand Pashtun tribesmen from North-West Frontier Province poured into Jammu and Kashmir to help with the rebellion against the maharaja's rule. They were led by experienced military leaders and were equipped with modern arms. The maharaja's crumbling forces were unable to withstand the onslaught. The tribesmen captured the towns of Muzaffarabad and Baramulla, the latter of which is northwest of the state capital Srinagar. On October 24, the Maharaja requested military assistance from India, which responded that it was unable to help him unless he acceded to India. Accordingly, on October 26, 1947, Maharaja Hari Singh signed an Instrument of Accession, handing over control of defence, external affairs, and communications to the Government of India in return for military aid. Indian troops were immediately airlifted into Srinagar. Pakistan intervened subsequently. Fighting ensued between the Indian and Pakistani armies, with the two areas of control more or less stabilised around what is now known as the "Line of Control". India later approached the United Nations, asking it to resolve the dispute, and resolutions were passed in favour of the holding of a plebiscite with regard to Kashmir's future. However, no such plebiscite has ever been held on either side, since there was a precondition that required the withdrawal of the Pakistani army along with the non-state elements and the subsequent partial withdrawal of the Indian army. from the parts of Kashmir under their respective control – a withdrawal that never took place. In 1949, a formal cease-fire line separating the Indian- and Pakistani-controlled parts of Kashmir came into effect. Following the 1949 cease-fire agreement with India, the government of Pakistan divided the northern and western parts of Kashmir that it controlled at the time of the cease-fire into the following two separately controlled political entities: Azad Jammu and Kashmir (AJK) – the narrow, southern part, long, with a width varying from . Gilgit–Baltistan formerly called the Federally Administered Northern Areas (FANA) – the much larger political entity to the north of AJK with an area of . In 1955, the Poonch uprising broke out. It was largely concentrated in areas of Rawalakot as well as the rest of Poonch Division. It ended in 1956. At one time under Pakistani control, Kashmir's Shaksgam tract, a small region along the northeastern border of Gilgit–Baltistan, was provisionally ceded by Pakistan to the People's Republic of China in 1963 and now forms part of China's Xinjiang Uygur Autonomous Region. In 1972, the then current border between the Indian and Pakistani controlled parts of Kashmir was designated as the "Line of Control". This line has remained unchanged since the 1972 Simla Agreement, which bound the two countries "to settle their differences by peaceful means through bilateral negotiations". Some political experts claim that, in view of that pact, the only solution to the issue is mutual negotiation between the two countries without involving a third party such as the United Nation. The 1974 Interim Constitution Act was passed by the 48-member Azad Jammu and Kashmir unicameral assembly. In April 1997, the Nawaz Sharif government refused to grant constitutional status to Azad Jammu and Kashmir stating that "'The grant of constitutional rights to these people will amount to unilateral annexation of these areas." Government Azad Jammu and Kashmir (AJK) is nominally a self-governing state, but ever since the 1949 ceasefire between Indian and Pakistani forces, Pakistan has exercised control over the state without incorporating it into Pakistan. Azad Kashmir has its own elected president, prime minister, legislative assembly, high court (with Azam Khan as its present chief justice), and official flag. Azad Kashmir's budget and tax affairs, are dealt with by the Azad Jammu and Kashmir Council rather than by Pakistan's Central Board of Revenue. The Azad Jammu and Kashmir Council is a supreme body consisting of 14 members, 8 from the government of Azad Jammu and Kashmir and 6 from the government of Pakistan. Its chairman/chief executive is the prime minister of Pakistan. Other members of the council are the president and the prime minister of Azad Kashmir (or an individual nominated by her/him) and 6 members of the AJK Legislative Assembly. Azad Kashmir Day is celebrated in Azad Jammu and Kashmir on October 24, which is the day that the Azad Jammu and Kashmir government was created in 1947. Pakistan has celebrated Kashmir Solidarity Day on February 5 of each year since 1990 as a day of protest against India's de facto sovereignty over its State of Jammu and Kashmir. That day is a national holiday in Pakistan. Pakistan observes the Kashmir Accession Day as Black Day on October 27 of each year since 1947 as a day of protest against the accession of Jammu and Kashmir State to India and its military presence in the Indian-controlled parts of Jammu and Kashmir. Brad Adams, the Asia director at the U.S.-based NGO Human Rights Watch said in 2006: "Although 'azad' means 'free,' the residents of Azad Kashmir are anything but; the Pakistani authorities govern the Azad Kashmir government with tight controls on basic freedoms." Scholar Christopher Snedden has observed that despite tight controls, the people of Azad Kashmir have generally accepted whatever Pakistan has done to them, which in any case has varied little from how most Pakistanis have been treated (by Pakistan). According to Christopher Snedden, one of the reasons for this was that the people of Azad Kashmir had always wanted to be part of Pakistan. Consequently, having little to fear from a pro-Pakistan population devoid of options, Pakistan imposed its will through the Federal Ministry of Kashmir Affairs and failed to empower the people of Azad Kashmir, allowing genuine self-government for only a short period in the 1970s. According to the interim constitution that was drawn up in the 1970s, the only political parties that are allowed to exist are those that pay allegiance to Pakistan: "No person or political party in Azad Jammu and Kashmir shall be permitted... activities prejudicial or detrimental to the State's accession to Pakistan." The pro-independence Jammu and Kashmir Liberation Front has never been allowed to contest elections in Azad Kashmir. While the interim constitution does not give them a choice, the people of Azad Kashmir have not considered any option other than joining Pakistan. Except in a legal sense, Azad Kashmir has been fully integrated into Pakistan. Azad Kashmir is home to a vibrant civil society. One of the organizations active in the territory and inside Pakistan is YFK-International Kashmir Lobby Group, an NGO that seeks better India-Pakistan relations through conflict resolution in Kashmir. Development According to the project report by the Asian Development Bank, the bank has set out development goals for Azad Kashmir in the areas of health, education, nutrition, and social development. The whole project is estimated to cost US$76 million. Germany, between 2006 and 2014, has also donated $38 million towards the AJK Health Infrastructure Programme. Administrative divisions The state is administratively divided into three divisions which, in turn, are divided into ten districts. Demographics Population The population of Azad Kashmir, according to the preliminary results of the 2017 Census, is 4.045 million. The website of the AJK government reports the literacy rate to be 74%, with the enrolment rate in primary school being 98% and 90% for boys and girls respectively. The population of Azad Kashmir is almost entirely Muslim. The people of this region culturally differ from the Kashmiris living in the Kashmir Valley of Jammu and Kashmir and are closer to the culture of Jammu. Mirpur, Kotli, and Bhimber are all old towns of the Jammu region. Religion Azad Jammu and Kashmir has an almost entirely Muslim population. According to data maintained by Christian community organizations, there are around 4,500 Christian residents in the region. Bhimber is home to most of them, followed by Mirpur and Muzaffarabad. A few dozen families also live in Kotli, Poonch, and Bagh. However, the Christian community has been struggling to get residential status and property rights in AJK. There is no official data on the total number of Bahais in AJK. Only six Bahai families are known to be living in Muzaffarabad with others living in rural areas. The followers of the Ahmadi faith are estimated to be somewhere between 20,000 and 25,000, and most of them live in Kotli, Mirpur, Bhimber, and Muzaffarabad. Ethnic groups Christopher Snedden writes that most of the native residents of Azad Kashmir are not of Kashmiri ethnicity; rather, they could be called "Jammuites" due to their historical and cultural links with that region, which is coterminous with neighbouring Punjab and Hazara. Because their region was formerly a part of the princely state of Jammu and Kashmir and is named after it, many Azad Kashmiris have adopted the "Kashmiri" identity, whereas in an ethnolinguistic context, the term "Kashmiri" would ordinarily refer to natives of the Kashmir Valley region. The population of Azad Kashmir has strong historical, cultural and linguistic affinities with the neighbouring populations of upper Punjab and Potohar region of Pakistan, whereas the Sudhans have the oral tradition of the Pashtuns. The main communities living in this region are: Gujjars – They are an agricultural tribe and are estimated to be the largest community living in the ten districts of Azad Kashmir. Sudhans – (also known as Sadozai, Sardar) are the second largest tribe, living mainly in the districts of Poonch, Sudhanoti, Bagh, and Kotli in Azad Kashmir, and allegedly originating from the Pashtun areas. Together with the Rajputs, they are the source of most of Azad Kashmir's political leaders. Jats – They are one of the larger communities of AJK and primarily inhabit the districts of Mirpur, Bhimber, and Kotli. A large Mirpuri population lives in the U.K. and it is estimated that more people of Mirpuri origins are now residing in the U.K. than in the Mirpur district, which retains strong ties with the U.K. Rajputs – They are spread across the territory, and they number a little under half a million. Together with the Sundhans, they are the source of most of Azad Kashmir's political class. Mughals – Largely located in the Bagh and Muzaffarabad districts. Awans – A clan with significant numbers in Azad Jammu and Kashmir, living mainly in the Bagh, Poonch, Hattian Bala, and Muzaffarabad. Awans also reside in Punjab and Khyber Pakhtunkhwa in large numbers. Dhund – They are a large clan in Azad Jammu and Kashmir and live mostly in the Bagh, Hattian Bala, and Muzaffarabad districts. They also inhabit Abbottabad and upper Potohar Punjab in large numbers. Kashmiris – Ethnic Kashmiri populations are found in the Neelam Valley and the Leepa Valley (see Kashmiris in Azad Kashmir). Languages The official language of Azad Kashmir is Urdu, while English is used in higher domains. The majority of the population, however, are native speakers of other languages. The foremost among these is Pahari–Pothwari with its various dialects. There are also sizeable communities speaking Kashmiri (mostly in the north), Gujari (throughout the territory), and Dogri (in the south), as well as pockets of speakers of Kundal Shahi, Shina and Pashto. With the exception of Pashto and English, those languages belong to the Indo-Aryan language family. The dialects of the Pahari-Pothwari language complex cover most of the territory of Azad Kashmir. Those are also spoken across the Line of Control in the neighbouring areas of Indian Jammu and Kashmir and are closely related both to Punjabi to the south and Hinko to the northwest. The language variety in the southern districts of Azad Kashmir is known by a variety of names – including Mirpuri, Pothwari and Pahari – and is closely related to the Pothwari proper spoken to the east in the Pothohar region of Punjab. The dialects of the central districts of Azad Kashmir are occasionally referred to in the literature as Chibhali or Punchi, but the speakers themselves usually call them Pahari, an ambiguous name that is also used for several unrelated languages of the lower Himalayas. Going north, the speech forms gradually change into Hindko. Today, in the Muzaffarabad District the preferred local name for the language is Hindko, although it is still apparently more closely related to the core dialects of Pahari. Further north in the Neelam Valley the dialect, locally also known as Parmi, can more unambiguously be subsumed under Hindko. Another major language of Azad Kashmir is Gujari. It is spoken by several hundred thousand people among the traditionally nomadic Gujars, many of whom are nowadays settled. Not all ethnic Gujars speak Gujari, the proportion of those who have shifted to other languages is probably higher in southern Azad Kashmir. Gujari is most closely related to the Rajasthani languages (particularly Mewati), although it also shares features with Punjabi. It is dispersed over large areas in northern Pakistan and India. Within Pakistan, the Gujari dialects of Azad Kashmir are more similar, in terms of shared basic vocabulary and mutual intelligibility, to the Gujar varieties of the neighbouring Hazara region than to the dialects spoken further to the northwest in Khyber Pakhtunkhwa and north in Gilgit. There are scattered communities of Kashmiri speakers, notably in the Neelam Valley, where they form the second-largest language group after speakers of Hindko. There have been calls for the teaching of Kashmiri (particularly in order to counter India's claim of promoting the culture of Kashmir), but the limited attempts at introducing the language at the secondary school level have not been successful, and it is Urdu, rather than Kashmiri, that Kashmiri Muslims have seen as their identity symbol. There is an ongoing process of gradual shift to larger local languages, but at least in the Neelam Valley there still exist communities for whom Kashmiri is the sole mother tongue. There are speakers of Dogri in the southernmost district of Bhimber, where they are estimated to represent almost a third of the district's population. In the northernmost district of Neelam, there are small communities of speakers of several other languages. Shina, which like Kashmiri belongs to the broad Dardic group, is present in two distinct varieties spoken altogether in three villages. Pashto, of the Iranian subgroup and the majority language in the neighbouring province of Khyber Pakhtunkhwa, is spoken in two villages, both situated on the Line of Control. The endangered Kundal Shahi is native to the eponymous village and it is the only language not found outside Azad Kashmir. Economy As of 2021, GDP of Azad Jammu and Kashmir was estimated to be 10 billion pounds, giving per capita an income of £5604. Historically the economy of Azad Kashmir has been agricultural which meant that land was the main source or mean of production. This means that all food for immediate and long-term consumption was produced from the land. The produce included various crops, fruits, vegetables, etc. The land was also the source of other livelihood necessities such as wood, fuel, grazing for animals which then turned into dairy products. Because of this land was also the main source of revenue for the governments whose primary purpose for centuries was to accumulate revenue. Agriculture is a major part of Azad Kashmir's economy. Low-lying areas that have high populations grow crops like barley, mangoes, millet, corn (maize), and wheat, and also raise cattle. In the elevated areas that are less populated and more spread out, forestry, corn, and livestock are the main sources of income. There are mineral and marble resources in Azad Kashmir close to Mirpur and Muzaffarabad. There are also graphite deposits at Mohriwali. There are also reservoirs of low-grade coal, chalk, bauxite, and zircon. Local household industries produce carved wooden objects, textiles, and dhurrie carpets. There is also an arts and crafts industry that produces such cultural goods as namdas, shawls, pashmina, pherans, Papier-mâché, basketry copper, rugs, wood carving, silk and woolen clothing, patto, carpets, namda gubba, and silverware. Agricultural goods produced in the region include mushrooms, honey, walnuts, apples, cherries, medicinal herbs and plants, resin, deodar, kail, chir, fir, maple, and ash timber. The migration to the UK was accelerated and by the completion of Mangla Dam in 1967 the process of 'chain migration' became in full flow. Today, remittances from British Mirpuri community make a critical role in AJK's economy. In the mid-1950s various economic and social development processes were launched in Azad Kashmir. In the 1960s, with the construction of the Mangla Dam in Mirpur District, the Azad Jammu and Kashmir Government began to receive royalties from the Pakistani government for the electricity that the dam provided to Pakistan. During the mid-2000s, a multibillion-dollar reconstruction began in the aftermath of the 2005 Kashmir earthquake. In addition to agriculture, textiles, and arts and crafts, remittances have played a major role in the economy of Azad Kashmir. One analyst estimated that the figure for Azad Kashmir was 25.1% in 2001. With regard to annual household income, people living in the higher areas are more dependent on remittances than are those living in the lower areas. In the latter part of 2006, billions of dollars for development were mooted by international aid agencies for the reconstruction and rehabilitation of earthquake-hit zones in Azad Kashmir, though much of that amount was subsequently lost in bureaucratic channels, leading to considerable delays in help getting to the neediest. Hundreds of people continued to live in tents long after the earthquake. A land-use plan for the city of Muzaffarabad was prepared by the Japan International Cooperation Agency. Tourist destinations in the area include the following: Muzaffarabad, the capital city of Azad Kashmir, is located on the banks of the Jhelum and Neelum rivers. It is from Rawalpindi and Islamabad. Well-known tourist spots near Muzaffarabad are the Red Fort, Pir Chinassi, Patika, Subri Lake and Awan Patti. The Neelam Valley is situated to the north and northeast of Muzaffarabad, The gateway to the valley. The main tourist attractions in the valley are Athmuqam, Kutton, Keran, Changan, Sharda, Kel, Arang Kel and Taobat. Sudhanoti is one of the eight districts of Azad Kashmir in Pakistan. Sudhanoti is located away from Islamabad, the Capital of Pakistan. It is connected with Rawalpindi and Islamabad through Azad Pattan road. Rawalakot city is the headquarters of Poonch District and is located from Islamabad. Tourist attractions in Poonch District are Banjosa Lake, Devi Gali, Tatta Pani, and Toli Pir. Bagh city, the headquarters of Bagh District, is from Islamabad and from Muzaffarabad. The principal tourist attractions in Bagh District are Bagh Fort, Dhirkot, Sudhan Gali, Ganga Lake, Ganga Choti, Kotla Waterfall, Neela Butt, Danna, Panjal Mastan National Park, and Las Danna. The Leepa Valley is located southeast of Muzaffarabad. It is the most charming and scenic place for tourists in Azad Kashmir. New Mirpur City is the headquarters of Mirpur District. The main tourist attractions near New Mirpur City are the Mangla Lake and Ramkot Fort. Education The literacy rate in Azad Kashmir was 62% in 2004, higher than in any other region of Pakistan. The current literacy rate of Azad Kashmir is 76.60% in 2018. And it remained at 79.80% in 2019. According to the 2020–2021 census, the literacy rate in Azad Kashmir was 91.34%. However, only 2.2% were graduates, compared to the average of 2.9% for Pakistan. Universities The following is a list of universities recognised by Higher Education Commission of Pakistan (HEC): * Granted university status. Cadet College Pallandri Cadet College Palandri is situated about from Islamabad Medical colleges The following is a list of undergraduate medical institutions recognised by Pakistan Medical and Dental Council (PMDC) . Mohtarma Benazir Bhutto Shaheed Medical College in Mirpur Azad Jammu Kashmir Medical College in Muzafarabad Poonch Medical College in Rawalakot Private medical colleges Mohi-ud-Din Islamic Medical College in Mirpur Sports Football, cricket, and volleyball are very popular in Azad Kashmir. Many tournaments are also held throughout the year and in the holy month of Ramazan, night-time flood-lit tournaments are also organised. Azad Kashmir has its own T20 tournament called the Kashmir Premier League, which started in 2021. New Mirpur City has a cricket stadium (Quaid-e-Azam Stadium) which has been taken over by the Pakistan Cricket Board for renovation to bring it up to the international standards. There is also a cricket stadium in Muzaffarabad with a capacity of 8,000 people. This stadium has hosted 8 matches of the Inter-District Under 19 Tournament 2013. There are also registered football clubs: Pilot Football Club Youth Football Club Kashmir National FC Azad Super FC Culture Tourism Notable people Saif Ali Janjua, recipient of Nishan-e-Haider. Aziz Khan, 11th Chairman Joint Chief of Staff Committee (CJCSC) of Pakistan Armed Forces. Khan Muhammad Khan, politician from Poonch who served as the Chairman of the War Council during the 1947 Poonch Rebellion. Muhammad Hayyat Khan, former President of Azad Kashmir. Sardar Ibrahim Khan, first and longest-serving President of Azad Kashmir. Masood Khan, former President of Azad Kashmir and current Pakistani ambassador to the United States. Zaman Khan, cricketer currently playing for the Pakistani national cricket team. Khalid Mahmood, British politician and Labour MP for Birmingham Perry Barr. Mohammad Yasin, British politician and Labour MP for Bedford. See also Northern Pakistan 1941 Census of Jammu and Kashmir Kashmir conflict Tourism in Azad Kashmir List of cultural heritage sites in Azad Kashmir Trans-Karakoram Tract Notes References Sources Further reading External links Planning & Development Department AJ&K AJ&K Planning and Development Department AJ&K Tourism & Archaeology Department Tourism in Azad Kashmir Disputed territories in Asia Foreign relations of Pakistan States and territories established in 1947 Subdivisions of Pakistan Territorial disputes of India 2005 Kashmir earthquake Countries and territories where Urdu is an official language Kashmiri-speaking countries and territories
2754
https://en.wikipedia.org/wiki/AutoCAD%20DXF
AutoCAD DXF
AutoCAD DXF (Drawing Interchange Format, or Drawing Exchange Format) is a CAD data file format developed by Autodesk for enabling data interoperability between AutoCAD and other programs. DXF was introduced in December 1982 as part of AutoCAD 1.0, and was intended to provide an exact representation of the data in the AutoCAD native file format, DWG (Drawing). For many years, Autodesk did not publish specifications, making correct creation of DXF files difficult. Autodesk now publishes the incomplete DXF specifications online. Versions of AutoCAD from Release 10 (October 1988) and up support both ASCII and binary forms of DXF. Earlier versions support only ASCII. As AutoCAD has become more powerful, supporting more complex object types, DXF has become less useful. Certain object types, including ACIS solids and regions, are not documented. Other object types, including AutoCAD 2006's dynamic blocks, and all of the objects specific to the vertical market versions of AutoCAD, are partially documented, but not well enough to allow other developers to support them. For these reasons many CAD applications use the DWG format which can be licensed from Autodesk or non-natively from the Open Design Alliance. DXF files do not specify the units of measurement used for its coordinates and dimensions. Most CAD systems and many vector graphics packages support the import and export of DXF files, notably Adobe products, Inkscape & Blender. Some CAD systems use DXF as their native format, notably QCAD and LibreCAD. File structure ASCII versions of DXF can be read with any text editor. The basic organization of a DXF file is as follows: section General information about the drawing. Each parameter has a variable name and an associated value. section Holds the information for application-defined classes whose instances appear in the , , and sections of the database. Generally does not provide sufficient information to allow interoperability with other programs. section This section contains definitions of named items. Application ID () table Block Record () table Dimension Style () table Layer () table Linetype () table Text style () table User Coordinate System () table View () table Viewport configuration () table section This section contains Block Definition entities describing the entities comprising each Block in the drawing. section This section contains the drawing entities, including any Block References. section Contains the data that apply to nongraphical objects, used by AutoLISP, and ObjectARX applications. section Contains the preview image for the DXF file. The data format of a DXF is called a "tagged data" format, which "means that each data element in the file is preceded by an integer number that is called a group code. A group code's value indicates what type of data element follows. This value also indicates the meaning of a data element for a given object (or record) type. Virtually all user-specified information in a drawing file can be represented in DXF format." Criticism Because DXF is only partially and poorly documented, missing documentation of key functionality like blocks & layouts, consideration is often given to alternative open formats like SVG (an open format defined by the W3C), DWF (an open format defined by Autodesk) or even EPS (ISO/IEC standard 29112:2018). DXF (as well as DWG) is however still a prefered format for CAD files for use by the ISO. See also Design Web Format (DWF) Open Design Alliance (originally called OpenDWG) References External links AutoCAD DXF Reference (from Release 14, 1998) (PDF version from 2012) AutoCAD DXF File Format Summary. Annotated example DXF file AutoDesk Online DXF File Viewer. 1982 introductions DXF Autodesk products DXF
2764
https://en.wikipedia.org/wiki/AbiWord
AbiWord
AbiWord () is a free and open-source software word processor. It is written in C++ and since version 3 it is based on GTK+ 3. The name "AbiWord" is derived from the root of the Spanish word "abierto", meaning "open". AbiWord was originally started by SourceGear Corporation as the first part of a proposed AbiSuite but was adopted by open source developers after SourceGear changed its business focus and ceased development. It now runs on Linux, ReactOS, Solaris, AmigaOS 4.0 (through its Cygwin X11 engine), MeeGo (on the Nokia N9 smartphone), Maemo (on the Nokia N810), QNX and other operating systems. Development of a version for Microsoft Windows has temporarily ended due to lack of maintainers (the latest released versions are 2.8.6 and 2.9.4 beta). The macOS port has remained on version 2.4 since 2005, although the current version does run non-natively on macOS through XQuartz. AbiWord is part of the AbiSource project which develops a number of office-related technologies. Abiword is one of the rare text processing software which allows local users to edit simultaneously the same shared document in a local network, without the requirement of an Internet connection, since 2009. Features AbiWord supports both basic word processing features such as lists, indents and character formats, and more sophisticated features including tables, styles, page headers and footers, footnotes, templates, multiple views, page columns, spell checking, and grammar checking. Starting with version 2.8.0, AbiWord includes a collaboration plugin that allows integration with AbiCollab.net, a Web-based service that permits multiple users to work on the same document in real time, in full synchronization. The Presentation view of AbiWord, which permits easy display of presentations created in AbiWord on "screen-sized" pages, is another feature not often found in word processors. Interface AbiWord generally works similarly to classic versions (pre-Office 2007) of Microsoft Word, as direct ease of migration was a high priority early goal. While many interface similarities remain, cloning the Word interface is no longer a top priority. The interface is intended to follow user interface guidelines for each respective platform. File formats AbiWord comes with several import and export filters providing partial support for such formats as HTML, Microsoft Word (.doc), Office Open XML (.docx), OpenDocument Text (.odt), Rich Text Format (.rtf), and text documents (.txt). LaTeX is supported for export only. Plug-in filters are available to deal with many other formats, notably WordPerfect documents. The native file format, .abw, uses XML, so as to mitigate vendor lock-in concerns with respect to interoperability and digital archiving. Grammar checking The AbiWord project includes a US English-only grammar checking plugin using Link Grammar. AbiWord had grammar checking before any other open source word processor, although a grammar checker was later added to OpenOffice.org. Link Grammar is both a theory of syntax and an open source parser which is now developed by the AbiWord project. See also List of free and open-source software packages List of word processors Comparison of word processors Office Open XML software OpenDocument software References External links 1998 software Cross-platform free software Free software programmed in C++ Free word processors Linux word processors MacOS word processors Office software that uses GTK Portable software Software using the GPL license Windows word processors
2778
https://en.wikipedia.org/wiki/Parallel%20ATA
Parallel ATA
Parallel ATA (PATA), originally , also known as IDE, is a standard interface designed for IBM PC-compatible computers. It was first developed by Western Digital and Compaq in 1986 for compatible hard drives and CD or DVD drives. The connection is used for storage devices such as hard disk drives, floppy disk drives, and optical disc drives in computers. The standard is maintained by the X3/INCITS committee. It uses the underlying (ATA) and Packet Interface (ATAPI) standards. The Parallel ATA standard is the result of a long history of incremental technical development, which began with the original AT Attachment interface, developed for use in early PC AT equipment. The ATA interface itself evolved in several stages from Western Digital's original Integrated Drive Electronics (IDE) interface. As a result, many near-synonyms for ATA/ATAPI and its previous incarnations are still in common informal use, in particular Extended IDE (EIDE) and Ultra ATA (UATA). After the introduction of SATA in 2003, the original ATA was renamed to Parallel ATA, or PATA for short. Parallel ATA cables have a maximum allowable length of . Because of this limit, the technology normally appears as an internal computer storage interface. For many years, ATA provided the most common and the least expensive interface for this application. It has largely been replaced by SATA in newer systems. History and terminology The standard was originally conceived as the "AT Bus Attachment," officially called "AT Attachment" and abbreviated "ATA" because its primary feature was a direct connection to the 16-bit ISA bus introduced with the IBM PC/AT. The original ATA specifications published by the standards committees use the name "AT Attachment". The "AT" in the IBM PC/AT referred to "Advanced Technology" so ATA has also been referred to as "Advanced Technology Attachment". When a newer Serial ATA (SATA) was introduced in 2003, the original ATA was renamed to Parallel ATA, or PATA for short. Physical ATA interfaces became a standard component in all PCs, initially on host bus adapters, sometimes on a sound card but ultimately as two physical interfaces embedded in a Southbridge chip on a motherboard. Called the "primary" and "secondary" ATA interfaces, they were assigned to base addresses 0x1F0 and 0x170 on ISA bus systems. They were replaced by SATA interfaces. IDE and ATA-1 The first version of what is now called the ATA/ATAPI interface was developed by Western Digital under the name Integrated Drive Electronics (IDE). Together with Compaq Computer (the initial customer), they worked with various disk drive manufacturers to develop and ship early products with the goal of remaining software compatible with the existing IBM PC hard drive interface. The first such drives appeared internally in Compaq PCs in 1986 and were first separately offered by Conner Peripherals as the CP342 in June 1987. The term Integrated Drive Electronics refers to the fact that the drive controller is integrated into the drive, as opposed to a separate controller situated at the other side of the connection cable to the drive. On an IBM PC compatible, CP/M machine, or similar, this was typically a card installed on a motherboard. The interface cards used to connect a parallel ATA drive to, for example, an ISA Slot, are not drive controllers: they are merely bridges between the host bus and the ATA interface. Since the original ATA interface is essentially just a 16-bit ISA bus in disguise, the bridge was especially simple in case of an ATA connector being located on an ISA interface card. The integrated controller presented the drive to the host computer as an array of 512-byte blocks with a relatively simple command interface. This relieved the mainboard and interface cards in the host computer of the chores of stepping the disk head arm, moving the head arm in and out, and so on, as had to be done with earlier ST-506 and ESDI hard drives. All of these low-level details of the mechanical operation of the drive were now handled by the controller on the drive itself. This also eliminated the need to design a single controller that could handle many different types of drives, since the controller could be unique for the drive. The host need only to ask for a particular sector, or block, to be read or written, and either accept the data from the drive or send the data to it. The interface used by these drives was standardized in 1994 as ANSI standard X3.221-1994, AT Attachment Interface for Disk Drives. After later versions of the standard were developed, this became known as "ATA-1". A short-lived, seldom-used implementation of ATA was created for the IBM XT and similar machines that used the 8-bit version of the ISA bus. It has been referred to as "XT-IDE", "XTA" or "XT Attachment". EIDE and ATA-2 In 1994, about the same time that the ATA-1 standard was adopted, Western Digital introduced drives under a newer name, Enhanced IDE (EIDE). These included most of the features of the forthcoming ATA-2 specification and several additional enhancements. Other manufacturers introduced their own variations of ATA-1 such as "Fast ATA" and "Fast ATA-2". The new version of the ANSI standard, AT Attachment Interface with Extensions ATA-2 (X3.279-1996), was approved in 1996. It included most of the features of the manufacturer-specific variants. ATA-2 also was the first to note that devices other than hard drives could be attached to the interface: ATAPI As mentioned in the previous sections, ATA was originally designed for, and worked only with hard disk drives and devices that could emulate them. The introduction of ATAPI (ATA Packet Interface) by a group called the Small Form Factor committee (SFF) allowed ATA to be used for a variety of other devices that require functions beyond those necessary for hard disk drives. For example, any removable media device needs a "media eject" command, and a way for the host to determine whether the media is present, and these were not provided in the ATA protocol. The Small Form Factor committee approached this problem by defining ATAPI, the "ATA Packet Interface". ATAPI is actually a protocol allowing the ATA interface to carry SCSI commands and responses; therefore, all ATAPI devices are actually "speaking SCSI" other than at the electrical interface. In fact, some early ATAPI devices were simply SCSI devices with an ATA/ATAPI to SCSI protocol converter added on. The SCSI commands and responses are embedded in "packets" (hence "ATA Packet Interface") for transmission on the ATA cable. This allows any device class for which a SCSI command set has been defined to be interfaced via ATA/ATAPI. ATAPI devices are also "speaking ATA", as the ATA physical interface and protocol are still being used to send the packets. On the other hand, ATA hard drives and solid state drives do not use ATAPI. ATAPI devices include CD-ROM and DVD-ROM drives, tape drives, and large-capacity floppy drives such as the Zip drive and SuperDisk drive. The SCSI commands and responses used by each class of ATAPI device (CD-ROM, tape, etc.) are described in other documents or specifications specific to those device classes and are not within ATA/ATAPI or the T13 committee's purview. One commonly used set is defined in the MMC SCSI command set. ATAPI was adopted as part of ATA in INCITS 317-1998, AT Attachment with Packet Interface Extension (ATA/ATAPI-4). UDMA and ATA-4 The ATA/ATAPI-4 standard also introduced several "Ultra DMA" transfer modes. These initially supported speeds from 16 MByte/s to 33 MByte/second. In later versions, faster Ultra DMA modes were added, requiring new 80-wire cables to reduce crosstalk. The latest versions of Parallel ATA support up to 133 MByte/s. Ultra ATA Ultra ATA, abbreviated UATA, is a designation that has been primarily used by Western Digital for different speed enhancements to the ATA/ATAPI standards. For example, in 2000 Western Digital published a document describing "Ultra ATA/100", which brought performance improvements for the then-current ATA/ATAPI-5 standard by improving maximum speed of the Parallel ATA interface from 66 to 100 MB/s. Most of Western Digital's changes, along with others, were included in the ATA/ATAPI-6 standard (2002). Current terminology The terms "integrated drive electronics" (IDE), "enhanced IDE" and "EIDE" have come to be used interchangeably with ATA (now Parallel ATA, or PATA). In addition, there have been several generations of "EIDE" drives marketed, compliant with various versions of the ATA specification. An early "EIDE" drive might be compatible with ATA-2, while a later one with ATA-6. Nevertheless, a request for an "IDE" or "EIDE" drive from a computer parts vendor will almost always yield a drive that will work with most Parallel ATA interfaces. Another common usage is to refer to the specification version by the fastest mode supported. For example, ATA-4 supported Ultra DMA modes 0 through 2, the latter providing a maximum transfer rate of 33 megabytes per second. ATA-4 drives are thus sometimes called "UDMA-33" drives, and sometimes "ATA-33" drives. Similarly, ATA-6 introduced a maximum transfer speed of 100 megabytes per second, and some drives complying with this version of the standard are marketed as "PATA/100" drives. x86 BIOS size limitations Initially, the size of an ATA drive was stored in the system x86 BIOS using a type number (1 through 45) that predefined the C/H/S parameters and also often the landing zone, in which the drive heads are parked while not in use. Later, a "user definable" format called C/H/S or cylinders, heads, sectors was made available. These numbers were important for the earlier ST-506 interface, but were generally meaningless for ATA—the CHS parameters for later ATA large drives often specified impossibly high numbers of heads or sectors that did not actually define the internal physical layout of the drive at all. From the start, and up to ATA-2, every user had to specify explicitly how large every attached drive was. From ATA-2 on, an "identify drive" command was implemented that can be sent and which will return all drive parameters. Owing to a lack of foresight by motherboard manufacturers, the system BIOS was often hobbled by artificial C/H/S size limitations due to the manufacturer assuming certain values would never exceed a particular numerical maximum. The first of these BIOS limits occurred when ATA drives reached sizes in excess of 504 MiB, because some motherboard BIOSes would not allow C/H/S values above 1024 cylinders, 16 heads, and 63 sectors. Multiplied by 512 bytes per sector, this totals bytes which, divided by bytes per MiB, equals 504 MiB (528 MB). The second of these BIOS limitations occurred at 1024 cylinders, 256 heads, and 63 sectors, and a problem in MS-DOS limited the number of heads to 255. This totals to bytes (8032.5 MiB), commonly referred to as the 8.4 gigabyte barrier. This is again a limit imposed by x86 BIOSes, and not a limit imposed by the ATA interface. It was eventually determined that these size limitations could be overridden with a small program loaded at startup from a hard drive's boot sector. Some hard drive manufacturers, such as Western Digital, started including these override utilities with large hard drives to help overcome these problems. However, if the computer was booted in some other manner without loading the special utility, the invalid BIOS settings would be used and the drive could either be inaccessible or appear to the operating system to be damaged. Later, an extension to the x86 BIOS disk services called the "Enhanced Disk Drive" (EDD) was made available, which makes it possible to address drives as large as 264 sectors. Interface size limitations The first drive interface used 22-bit addressing mode which resulted in a maximum drive capacity of two gigabytes. Later, the first formalized ATA specification used a 28-bit addressing mode through LBA28, allowing for the addressing of 228 () sectors (blocks) of 512 bytes each, resulting in a maximum capacity of 128 GiB (137 GB). ATA-6 introduced 48-bit addressing, increasing the limit to 128 PiB (144 PB). As a consequence, any ATA drive of capacity larger than about 137 GB must be an ATA-6 or later drive. Connecting such a drive to a host with an ATA-5 or earlier interface will limit the usable capacity to the maximum of the interface. Some operating systems, including Windows XP pre-SP1, and Windows 2000 pre-SP3, disable LBA48 by default, requiring the user to take extra steps to use the entire capacity of an ATA drive larger than about 137 gigabytes. Older operating systems, such as Windows 98, do not support 48-bit LBA at all. However, members of the third-party group MSFN have modified the Windows 98 disk drivers to add unofficial support for 48-bit LBA to Windows 95 OSR2, Windows 98, Windows 98 SE and Windows ME. Some 16-bit and 32-bit operating systems supporting LBA48 may still not support disks larger than 2 TiB due to using 32-bit arithmetic only; a limitation also applying to many boot sectors. Primacy and obsolescence Parallel ATA (then simply called ATA or IDE) became the primary storage device interface for PCs soon after its introduction. In some systems, a third and fourth motherboard interface was provided, allowing up to eight ATA devices to be attached to the motherboard. Often, these additional connectors were implemented by inexpensive RAID controllers. Soon after the introduction of Serial ATA (SATA) in 2003, use of Parallel ATA declined. The first motherboards with built-in SATA interfaces usually had only a single PATA connector (for up to two PATA devices), along with multiple SATA connectors. Some PCs and laptops of the era have a SATA hard disk and an optical drive connected to PATA. As of 2007, some PC chipsets, for example the Intel ICH10, had removed support for PATA. Motherboard vendors still wishing to offer Parallel ATA with those chipsets must include an additional interface chip. In more recent computers, the Parallel ATA interface is rarely used even if present, as four or more Serial ATA connectors are usually provided on the motherboard and SATA devices of all types are common. With Western Digital's withdrawal from the PATA market, hard disk drives with the PATA interface were no longer in production after December 2013 for other than specialty applications. Parallel ATA interface Parallel ATA cables transfer data 16 bits at a time. The traditional cable uses 40-pin female connectors attached to a 40- or 80-conductor ribbon cable. Each cable has two or three connectors, one of which plugs into a host adapter interfacing with the rest of the computer system. The remaining connector(s) plug into storage devices, most commonly hard disk drives or optical drives. Each connector has 39 physical pins arranged into two rows (2.54 mm, -inch pitch), with a gap or key at pin 20. Earlier connectors may not have that gap, with all 40 pins available. Thus, later cables with the gap filled in are incompatible with earlier connectors, although earlier cables are compatible with later connectors. Round parallel ATA cables (as opposed to ribbon cables) were eventually made available for 'case modders' for cosmetic reasons, as well as claims of improved computer cooling and were easier to handle; however, only ribbon cables are supported by the ATA specifications. Pin 20 In the ATA standard, pin 20 is defined as a mechanical key and is not used. This pin's socket on the female connector is often obstructed, requiring pin 20 to be omitted from the male cable or drive connector; it is thus impossible to plug it in the wrong way round. However, some flash memory drives can use pin 20 as VCC_in to power the drive without requiring a special power cable; this feature can only be used if the equipment supports this use of pin 20. Pin 28 Pin 28 of the gray (slave/middle) connector of an 80-conductor cable is not attached to any conductor of the cable. It is attached normally on the black (master drive end) and blue (motherboard end) connectors. This enables cable select functionality. Pin 34 Pin 34 is connected to ground inside the blue connector of an 80-conductor cable but not attached to any conductor of the cable, allowing for detection of such a cable. It is attached normally on the gray and black connectors. 44-pin variant A 44-pin variant PATA connector is used for 2.5 inch drives inside laptops. The pins are closer together (2.0 mm pitch) and the connector is physically smaller than the 40-pin connector. The extra pins carry power. 80-conductor variant ATA's cables have had 40 conductors for most of its history (44 conductors for the smaller form-factor version used for 2.5" drives—the extra four for power), but an 80-conductor version appeared with the introduction of the UDMA/66 mode. All of the additional conductors in the new cable are grounds, interleaved with the signal conductors to reduce the effects of capacitive coupling between neighboring signal conductors, reducing crosstalk. Capacitive coupling is more of a problem at higher transfer rates, and this change was necessary to enable the 66 megabytes per second (MB/s) transfer rate of UDMA4 to work reliably. The faster UDMA5 and UDMA6 modes also require 80-conductor cables. Though the number of conductors doubled, the number of connector pins and the pinout remain the same as 40-conductor cables, and the external appearance of the connectors is identical. Internally, the connectors are different; the connectors for the 80-conductor cable connect a larger number of ground conductors to the ground pins, while the connectors for the 40-conductor cable connect ground conductors to ground pins one-to-one. 80-conductor cables usually come with three differently colored connectors (blue, black, and gray for controller, master drive, and slave drive respectively) as opposed to uniformly colored 40-conductor cable's connectors (commonly all gray). The gray connector on 80-conductor cables has pin 28 CSEL not connected, making it the slave position for drives configured cable select. Differences between connectors The image on the right shows PATA connectors after removal of strain relief, cover, and cable. Pin one is at bottom left of the connectors, pin 2 is top left, etc., except that the lower image of the blue connector shows the view from the opposite side, and pin one is at top right. The connector is an insulation-displacement connector: each contact comprises a pair of points which together pierce the insulation of the ribbon cable with such precision that they make a connection to the desired conductor without harming the insulation on the neighboring conductors. The center row of contacts are all connected to the common ground bus and attach to the odd numbered conductors of the cable. The top row of contacts are the even-numbered sockets of the connector (mating with the even-numbered pins of the receptacle) and attach to every other even-numbered conductor of the cable. The bottom row of contacts are the odd-numbered sockets of the connector (mating with the odd-numbered pins of the receptacle) and attach to the remaining even-numbered conductors of the cable. Note the connections to the common ground bus from sockets 2 (top left), 19 (center bottom row), 22, 24, 26, 30, and 40 on all connectors. Also note (enlarged detail, bottom, looking from the opposite side of the connector) that socket 34 of the blue connector does not contact any conductor but unlike socket 34 of the other two connectors, it does connect to the common ground bus. On the gray connector, note that socket 28 is completely missing, so that pin 28 of the drive attached to the gray connector will be open. On the black connector, sockets 28 and 34 are completely normal, so that pins 28 and 34 of the drive attached to the black connector will be connected to the cable. Pin 28 of the black drive reaches pin 28 of the host receptacle but not pin 28 of the gray drive, while pin 34 of the black drive reaches pin 34 of the gray drive but not pin 34 of the host. Instead, pin 34 of the host is grounded. The standard dictates color-coded connectors for easy identification by both installer and cable maker. All three connectors are different from one another. The blue (host) connector has the socket for pin 34 connected to ground inside the connector but not attached to any conductor of the cable. Since the old 40 conductor cables do not ground pin 34, the presence of a ground connection indicates that an 80 conductor cable is installed. The conductor for pin 34 is attached normally on the other types and is not grounded. Installing the cable backwards (with the black connector on the system board, the blue connector on the remote device and the gray connector on the center device) will ground pin 34 of the remote device and connect host pin 34 through to pin 34 of the center device. The gray center connector omits the connection to pin 28 but connects pin 34 normally, while the black end connector connects both pins 28 and 34 normally. Multiple devices on a cable If two devices are attached to a single cable, one must be designated as Device 0 (in the past, commonly designated master) and the other as Device 1 (in the past, commonly designated as slave). This distinction is necessary to allow both drives to share the cable without conflict. The Device 0 drive is the drive that usually appears "first" to the computer's BIOS and/or operating system. In most personal computers the drives are often designated as "C:" for the Device 0 and "D:" for the Device 1 referring to one active primary partitions on each. The terms device and drive are used interchangeably in the industry, as in master drive or master device. The mode that a device must use is often set by a jumper setting on the device itself, which must be manually set to Device 0 (Master) or Device 1 (Slave). If there is a single device on a cable, it should be configured as Device 0. However, some certain era drives have a special setting called Single for this configuration (Western Digital, in particular). Also, depending on the hardware and software available, a Single drive on a cable will often work reliably even though configured as the Device 1 drive (most often seen where an optical drive is the only device on the secondary ATA interface). The words primary and secondary typically refers to the two IDE cables, which can have two drives each (primary master, primary slave, secondary master, secondary slave). Cable select A drive mode called cable select was described as optional in ATA-1 and has come into fairly widespread use with ATA-5 and later. A drive set to "cable select" automatically configures itself as Device 0 or Device 1, according to its position on the cable. Cable select is controlled by pin 28. The host adapter grounds this pin; if a device sees that the pin is grounded, it becomes the Device 0 (master) device; if it sees that pin 28 is open, the device becomes the Device 1 (slave) device. This setting is usually chosen by a jumper setting on the drive called "cable select", usually marked CS, which is separate from the Device 0/1 setting. Note that if two drives are configured as Device 0 and Device 1 manually, this configuration does not need to correspond to their position on the cable. Pin 28 is only used to let the drives know their position on the cable; it is not used by the host when communicating with the drives. In other words, the manual master/slave setting using jumpers on the drives takes precedence and allows them to be freely placed on either connector of the ribbon cable. With the 40-conductor cable, it was very common to implement cable select by simply cutting the pin 28 wire between the two device connectors; putting the slave Device 1 device at the end of the cable, and the master Device 0 on the middle connector. This arrangement eventually was standardized in later versions. However, it had one drawback: if there is just one master device on a 2-drive cable, using the middle connector, this results in an unused stub of cable, which is undesirable for physical convenience and electrical reasons. The stub causes signal reflections, particularly at higher transfer rates. Starting with the 80-conductor cable defined for use in ATAPI5/UDMA4, the master Device 0 device goes at the far-from-the-host end of the cable on the black connector, the slave Device 1 goes on the grey middle connector, and the blue connector goes to the host (e.g. motherboard IDE connector, or IDE card). So, if there is only one (Device 0) device on a two-drive cable, using the black connector, there is no cable stub to cause reflections (the unused connector is now in the middle of the ribbon). Also, cable select is now implemented in the grey middle device connector, usually simply by omitting the pin 28 contact from the connector body. Serialized, overlapped, and queued operations The parallel ATA protocols up through ATA-3 require that once a command has been given on an ATA interface, it must complete before any subsequent command may be given. Operations on the devices must be serializedwith only one operation in progress at a timewith respect to the ATA host interface. A useful mental model is that the host ATA interface is busy with the first request for its entire duration, and therefore can not be told about another request until the first one is complete. The function of serializing requests to the interface is usually performed by a device driver in the host operating system. The ATA-4 and subsequent versions of the specification have included an "overlapped feature set" and a "queued feature set" as optional features, both being given the name "Tagged Command Queuing" (TCQ), a reference to a set of features from SCSI which the ATA version attempts to emulate. However, support for these is extremely rare in actual parallel ATA products and device drivers because these feature sets were implemented in such a way as to maintain software compatibility with its heritage as originally an extension of the ISA bus. This implementation resulted in excessive CPU utilization which largely negated the advantages of command queuing. By contrast, overlapped and queued operations have been common in other storage buses; in particular, SCSI's version of tagged command queuing had no need to be compatible with APIs designed for ISA, allowing it to attain high performance with low overhead on buses which supported first party DMA like PCI. This has long been seen as a major advantage of SCSI. The Serial ATA standard has supported native command queueing (NCQ) since its first release, but it is an optional feature for both host adapters and target devices. Many obsolete PC motherboards do not support NCQ, but modern SATA hard disk drives and SATA solid-state drives usually support NCQ, which is not the case for removable (CD/DVD) drives because the ATAPI command set used to control them prohibits queued operations. Two devices on one cable—speed impact There are many debates about how much a slow device can impact the performance of a faster device on the same cable. There is an effect, but the debate is confused by the blurring of two quite different causes, called here "Lowest speed" and "One operation at a time". "Lowest speed" On early ATA host adapters, both devices' data transfers can be constrained to the speed of the slower device, if two devices of different speed capabilities are on the same cable. For all modern ATA host adapters, this is not true, as modern ATA host adapters support independent device timing. This allows each device on the cable to transfer data at its own best speed. Even with earlier adapters without independent timing, this effect applies only to the data transfer phase of a read or write operation. "One operation at a time" This is caused by the omission of both overlapped and queued feature sets from most parallel ATA products. Only one device on a cable can perform a read or write operation at one time; therefore, a fast device on the same cable as a slow device under heavy use will find it has to wait for the slow device to complete its task first. However, most modern devices will report write operations as complete once the data is stored in their onboard cache memory, before the data is written to the (slow) magnetic storage. This allows commands to be sent to the other device on the cable, reducing the impact of the "one operation at a time" limit. The impact of this on a system's performance depends on the application. For example, when copying data from an optical drive to a hard drive (such as during software installation), this effect probably will not matter. Such jobs are necessarily limited by the speed of the optical drive no matter where it is. But if the hard drive in question is also expected to provide good throughput for other tasks at the same time, it probably should not be on the same cable as the optical drive. HDD passwords and security ATA devices may support an optional security feature which is defined in an ATA specification, and thus not specific to any brand or device. The security feature can be enabled and disabled by sending special ATA commands to the drive. If a device is locked, it will refuse all access until it is unlocked. A device can have two passwords: A User Password and a Master Password; either or both may be set. There is a Master Password identifier feature which, if supported and used, can identify the current Master Password (without disclosing it). A device can be locked in two modes: High security mode or Maximum security mode. Bit 8 in word 128 of the IDENTIFY response shows which mode the disk is in: 0 = High, 1 = Maximum. In High security mode, the device can be unlocked with either the User or Master password, using the "SECURITY UNLOCK DEVICE" ATA command. There is an attempt limit, normally set to 5, after which the disk must be power cycled or hard-reset before unlocking can be attempted again. Also in High security mode, the SECURITY ERASE UNIT command can be used with either the User or Master password. In Maximum security mode, the device can be unlocked only with the User password. If the User password is not available, the only remaining way to get at least the bare hardware back to a usable state is to issue the SECURITY ERASE PREPARE command, immediately followed by SECURITY ERASE UNIT. In Maximum security mode, the SECURITY ERASE UNIT command requires the Master password and will completely erase all data on the disk. Word 89 in the IDENTIFY response indicates how long the operation will take. While the ATA lock is intended to be impossible to defeat without a valid password, there are purported workarounds to unlock a device. For sanitizing entire disks the built-in Secure Erase command is effective when implemented correctly. There have been a few reported instances of failures to erase some or all data. External parallel ATA devices Due to a short cable length specification and shielding issues it is extremely uncommon to find external PATA devices that directly use PATA for connection to a computer. A device connected externally needs additional cable length to form a U-shaped bend so that the external device may be placed alongside, or on top of the computer case, and the standard cable length is too short to permit this. For ease of reach from motherboard to device, the connectors tend to be positioned towards the front edge of motherboards, for connection to devices protruding from the front of the computer case. This front-edge position makes extension out the back to an external device even more difficult. Ribbon cables are poorly shielded, and the standard relies upon the cabling to be installed inside a shielded computer case to meet RF emissions limits. External hard disk drives or optical disk drives that have an internal PATA interface, use some other interface technology to bridge the distance between the external device and the computer. USB is the most common external interface, followed by Firewire. A bridge chip inside the external devices converts from the USB interface to PATA, and typically only supports a single external device without cable select or master/slave. Compact Flash interface Compact Flash in its IDE mode is essentially a miniaturized ATA interface, intended for use on devices that use flash memory storage. No interfacing chips or circuitry are required, other than to directly adapt the smaller CF socket onto the larger ATA connector. (Although most CF cards only support IDE mode up to PIO4, making them much slower in IDE mode than their CF capable speed) The ATA connector specification does not include pins for supplying power to a CF device, so power is inserted into the connector from a separate source. The exception to this is when the CF device is connected to a 44-pin ATA bus designed for 2.5-inch hard disk drives, commonly found in notebook computers, as this bus implementation must provide power to a standard hard disk drive. CF devices can be designated as devices 0 or 1 on an ATA interface, though since most CF devices offer only a single socket, it is not necessary to offer this selection to end users. Although CF can be hot-pluggable with additional design methods, by default when wired directly to an ATA interface, it is not intended to be hot-pluggable. ATA standards versions, transfer rates, and features The following table shows the names of the versions of the ATA standards and the transfer modes and rates supported by each. Note that the transfer rate for each mode (for example, 66.7 MB/s for UDMA4, commonly called "Ultra-DMA 66", defined by ATA-5) gives its maximum theoretical transfer rate on the cable. This is simply two bytes multiplied by the effective clock rate, and presumes that every clock cycle is used to transfer end-user data. In practice, of course, protocol overhead reduces this value. Congestion on the host bus to which the ATA adapter is attached may also limit the maximum burst transfer rate. For example, the maximum data transfer rate for conventional PCI bus is 133 MB/s, and this is shared among all active devices on the bus. In addition, no ATA hard drives existed in 2005 that were capable of measured sustained transfer rates of above 80 MB/s. Furthermore, sustained transfer rate tests do not give realistic throughput expectations for most workloads: They use I/O loads specifically designed to encounter almost no delays from seek time or rotational latency. Hard drive performance under most workloads is limited first and second by those two factors; the transfer rate on the bus is a distant third in importance. Therefore, transfer speed limits above 66 MB/s really affect performance only when the hard drive can satisfy all I/O requests by reading from its internal cache—a very unusual situation, especially considering that such data is usually already buffered by the operating system. , mechanical hard disk drives can transfer data at up to 524 MB/s, which is far beyond the capabilities of the PATA/133 specification. High-performance solid state drives can transfer data at up to 7000–7500 MB/s. Only the Ultra DMA modes use CRC to detect errors in data transfer between the controller and drive. This is a 16-bit CRC, and it is used for data blocks only. Transmission of command and status blocks do not use the fast signaling methods that would necessitate CRC. For comparison, in Serial ATA, 32-bit CRC is used for both commands and data. Features introduced with each ATA revision Speed of defined transfer modes Related standards, features, and proposals ATAPI Removable Media Device (ARMD) ATAPI devices with removable media, other than CD and DVD drives, are classified as ARMD (ATAPI Removable Media Device) and can appear as either a super-floppy (non-partitioned media) or a hard drive (partitioned media) to the operating system. These can be supported as bootable devices by a BIOS complying with the ATAPI Removable Media Device BIOS Specification, originally developed by Compaq Computer Corporation and Phoenix Technologies. It specifies provisions in the BIOS of a personal computer to allow the computer to be bootstrapped from devices such as Zip drives, Jaz drives, SuperDisk (LS-120) drives, and similar devices. These devices have removable media like floppy disk drives, but capacities more commensurate with hard drives, and programming requirements unlike either. Due to limitations in the floppy controller interface most of these devices were ATAPI devices, connected to one of the host computer's ATA interfaces, similarly to a hard drive or CD-ROM device. However, existing BIOS standards did not support these devices. An ARMD-compliant BIOS allows these devices to be booted from and used under the operating system without requiring device-specific code in the OS. A BIOS implementing ARMD allows the user to include ARMD devices in the boot search order. Usually an ARMD device is configured earlier in the boot order than the hard drive. Similarly to a floppy drive, if bootable media is present in the ARMD drive, the BIOS will boot from it; if not, the BIOS will continue in the search order, usually with the hard drive last. There are two variants of ARMD, ARMD-FDD and ARMD-HDD. Originally ARMD caused the devices to appear as a sort of very large floppy drive, either the primary floppy drive device 00h or the secondary device 01h. Some operating systems required code changes to support floppy disks with capacities far larger than any standard floppy disk drive. Also, standard-floppy disk drive emulation proved to be unsuitable for certain high-capacity floppy disk drives such as Iomega Zip drives. Later the ARMD-HDD, ARMD-"Hard disk device", variant was developed to address these issues. Under ARMD-HDD, an ARMD device appears to the BIOS and the operating system as a hard drive. ATA over Ethernet In August 2004, Sam Hopkins and Brantley Coile of Coraid specified a lightweight ATA over Ethernet protocol to carry ATA commands over Ethernet instead of directly connecting them to a PATA host adapter. This permitted the established block protocol to be reused in storage area network (SAN) applications. See also Advanced Host Controller Interface (AHCI) CE-ATA Consumer Electronics (CE) ATA FATA (hard drive) INT 13H for BIOS Enhanced Disk Drive Specification (SFF-8039i) IT8212, a low-end Parallel ATA controller Master/slave (technology) List of device bandwidths References External links CE-ATA Workgroup AT Attachment Computer storage buses Computer connectors Computer hardware standards
2779
https://en.wikipedia.org/wiki/Atari%202600
Atari 2600
The Atari 2600 is a home video game console developed and produced by Atari, Inc. Released in September 1977, it popularized microprocessor-based hardware and games stored on swappable ROM cartridges, a format first used with the Fairchild Channel F in 1976. Branded as the Atari Video Computer System (Atari VCS) from its release until November 1982, the VCS was bundled with two joystick controllers, a conjoined pair of paddle controllers, and a game cartridgeinitially Combat and later Pac-Man. Atari was successful at creating arcade video games, but their development cost and limited lifespan drove CEO Nolan Bushnell to seek a programmable home system. The first inexpensive microprocessors from MOS Technology in late 1975 made this feasible. The console was prototyped as codename Stella by Atari subsidiary Cyan Engineering. Lacking funding to complete the project, Bushnell sold Atari to Warner Communications in 1976. The Atari VCS launched in 1977 with nine simple, low-resolution games in 2 KB cartridges. The system's first killer app was the home conversion of Taito's arcade game Space Invaders in 1980. The VCS became widely successful, leading to the founding of Activision and other third-party game developers and to competition from console manufacturers Mattel and Coleco. By the end of its primary lifecycle in 1983–84, games for the 2600 were using more than four times the storage size of the launch games with significantly more advanced visuals and gameplay than the system was designed for, such as Activision's Pitfall! By 1982, the Atari 2600 was the dominant game system in North America. However, it saw competition from other consoles such as the Intellivision and ColecoVision, and poor decisions by Atari management damaged both the system and company's reputation, most notably the release of two highly anticipated games for the 2600: a port of the arcade game Pac-Man and E.T. the Extra-Terrestrial. Pac-Man became the 2600's highest-selling game, but was panned for being inferior to the arcade version. E.T. was rushed to market for the holiday shopping season and was similarly panned and became a commercial failure. Both games, and a glut of third-party shovelware, were factors in ending Atari's relevance in the console market, contributing to the video game crash of 1983. Warner sold Atari's home division to former Commodore CEO Jack Tramiel in 1984. In 1986, the new Atari Corporation under Tramiel released a lower-cost version of the 2600 and the backward-compatible Atari 7800, but it was Nintendo that led the recovery of the industry with its 1985 launch of the Nintendo Entertainment System. Production of the Atari 2600 ended on January 1, 1992, with an estimated 30 million units sold across its lifetime. History Atari, Inc. was founded by Nolan Bushnell and Ted Dabney in 1972. Its first major product was Pong, released in 1972, the first successful coin-operated video game. While Atari continued to develop new arcade games in following years, Pong gave rise to a number of competitors to the growing arcade game market. The competition along with other missteps by Atari led to financial problems in 1974, though recovering by the end of the year. By 1975, Atari had released a Pong home console, competing against Magnavox, the only other major producer of home consoles at the time. Atari engineers recognized, however, the limitation of custom logic integrated onto the circuit board, permanently confining the whole console to only one game. The increasing competition increased the risk, as Atari had found with past arcade games and again with dedicated home consoles. Both platforms are built from integrating discrete electro-mechanical components into circuits, rather than programmed as on a mainframe computer. Therefore, development of a console had cost at least plus time to complete, but the final product only had about a three-month shelf life until becoming outdated by competition. By 1974, Atari had acquired Cyan Engineering, a Grass Valley electronics company founded by Steve Mayer and Larry Emmons, both former colleagues of Bushnell and Dabney from Ampex, who helped to develop new ideas for Atari's arcade games. Even prior to the release of the home version of Pong, Cyan's engineers, led by Mayer and Ron Milner, had envisioned a home console powered by new programmable microprocessors capable of playing Atari's current arcade offerings. The programmable microprocessors would make a console's design significantly simpler and more powerful than any dedicated single-game unit. However, the cost of such chips was far outside the range that their market would tolerate. Atari had opened negotiations to use Motorola's new 6800 in future systems. MOS Technology 6502/6507 In September 1975, MOS Technology debuted the 6502 microprocessor for at the Wescon trade show in San Francisco. Mayer and Milner attended, and met with the leader of the team that created the chip, Chuck Peddle. They proposed using the 6502 in a game console, and offered to discuss it further at Cyan's facilities after the show. Over two days, MOS and Cyan engineers sketched out a 6502-based console design by Meyer and Milner's specifications. Financial models showed that even at , the 6502 would be too expensive, and Peddle offered them a planned 6507 microprocessor, a cost-reduced version of the 6502, and MOS's RIOT chip for input/output. Cyan and MOS negotiated the 6507 and RIOT chips at a pair. MOS also introduced Cyan to Microcomputer Associates, who had separately developed debugging software and hardware for MOS, and had developed the JOLT Computer for testing the 6502, which Peddle suggested would be useful for Atari and Cyan to use while developing their system. Milner was able to demonstrate a proof-of-concept for a programmable console by implementing Tank, an arcade game by Atari's subsidiary Kee Games, on the JOLT. As part of the deal, Atari wanted a second source of the chipset. Peddle and Paivinen suggested Synertek whose co-founder, Bob Schreiner, was a friend of Peddle. In October 1975, Atari informed the market that it was moving forward with MOS. The Motorola sales team had already told its management that the Atari deal was finalized, and Motorola management was livid. They announced a lawsuit against MOS the next week. Building the system By December 1975, Atari hired Joe Decuir, a recent graduate from University of California, Berkeley who had been doing his own testing on the 6502. Decuir began debugging the first prototype designed by Mayer and Milner, which gained the codename "Stella" after the brand of Decuir's bicycle. This prototype included a breadboard-level design of the graphics interface to build upon. A second prototype was completed by March 1976 with the help of Jay Miner, who created a chip called the Television Interface Adaptor (TIA) to send graphics and audio to a television. The second prototype included a TIA, a 6507, and a ROM cartridge slot and adapter. As the TIA's design was refined, Al Alcorn brought in Atari's game developers to provide input on features. There are significant limitations in the 6507, the TIA, and other components, so the programmers creatively optimized their games to maximize the console. The console lacks a framebuffer and requires games to instruct the system to generate graphics in synchronization with the electron gun in the cathode-ray tube (CRT) as it scans across rows on the screen. The programmers found ways to "race the beam" to perform other functions while the electron gun scans outside of the visible screen. Alongside the electronics development, Bushnell brought in Gene Landrum, a consultant who had just prior consulted for Fairchild Camera and Instrument for its upcoming Channel F, to determine the consumer requirements for the console. In his final report, Landrum suggested a living room aesthetic, with a wood grain finish, and the cartridges must be "idiot proof, child proof and effective in resisting potential static [electricity] problems in a living room environment". Landrum recommended it include four to five dedicated games in addition to the cartridges, but this was dropped in the final designs. The cartridge design was done by James Asher and Douglas Hardy. Hardy had been an engineer for Fairchild and helped in the initial design of the Channel F cartridges, but he quit to join Atari in 1976. The interior of the cartridge that Asher and Hardy designed was sufficiently different to avoid patent conflicts, but the exterior components were directly influenced by the Channel F to help work around the static electricity concerns. Atari was still recovering from its 1974 financial woes and needed additional capital to fully enter the home console market, though Bushnell was wary of being beholden to outside financial sources. Atari obtained smaller investments through 1975, but not at the scale it needed, and began considering a sale to a larger firm by early 1976. Atari was introduced to Warner Communications, which saw the potential for the growing video game industry to help offset declining profits from its film and music divisions. Negotiations took place during 1976, during which Atari cleared itself of liabilities, including settling a patent infringement lawsuit with Magnavox over Ralph H. Baer's patents that were the basis for the Magnavox Odyssey. In mid-1976, Fairchild announced the Channel F, planned for release later that year, beating Atari to the market. By October 1976, Warner and Atari agreed to the purchase of Atari for . Warner provided an estimated which was enough to fast-track Stella. By 1977, development had advanced enough to brand it the "Atari Video Computer System" (VCS) and start developing games. Launch and success The unit was showcased on June 4, 1977, at the Summer Consumer Electronics Show with plans for retail release in October. The announcement was purportedly delayed to wait out the terms of the Magnavox patent lawsuit settlement, which would have given Magnavox all technical information on any of Atari's products announced between June 1, 1976, and June 1, 1977. However, Atari encountered production problems during its first batch, and its testing was complicated by the use of cartridges. The Atari VCS was launched in September 1977 at , with two joysticks and a Combat cartridge; eight additional games were sold separately. Most of the launch games were based on arcade games developed by Atari or its subsidiary Kee Games: for example, Combat was based on Kee's Tank (1974) and Atari's Jet Fighter (1975). Atari sold between 350,000 and 400,000 Atari VCS units during 1977, attributed to the delay in shipping the units and consumers' unfamiliarity with a swappable-cartridge console that is not dedicated to only one game. In 1978, Atari sold only 550,000 of the 800,000 systems manufactured. This required further financial support from Warner to cover losses. Atari sold 1 million consoles in 1979, particularly during the holiday season, but there was new competition from the Mattel Electronics Intellivision and Magnavox Odyssey², which also use swappable ROM cartridges. The 2019 book They Create Worlds has Atari selling about 600,000 VCS systems in 1979, bringing the installed base to a little over 1.3 million. Atari obtained a license from Taito to develop a VCS conversion of its 1978 arcade hit Space Invaders. This is the first officially licensed arcade conversion for a home console. Its release in March 1980 doubled the console's sales for the year to more than 2 million units, and was considered the Atari VCS' killer application. Sales then doubled again for the next two years. The book They Create Worlds has Atari selling 1.25 million Space Invaders cartridges and over 1 million VCS systems in 1980, nearly doubling the install base to over 2 million, and then an estimated 3.1 million VCS systems in 1981. By 1982, 10 million consoles had been sold in the United States, while its best-selling game was Pac-Man at over copies sold by 1990. Pac-Man propelled worldwide Atari VCS sales to units during 1982, according to a November 1983 article in InfoWorld magazine. An August 1984 InfoWorld magazine article says more than Atari 2600 machines are sold by 1982. A March 1983 article in IEEE Spectrum magazine has about 3 million VCS sales in 1981, about 5.5 million in 1982, as well as a total of over 12 million VCS systems and estimated 120 million cartridges sold. In Europe, the Atari VCS sold 125,000 units in the United Kingdom during 1980, and 450,000 in West Germany by 1984. In France, where the VCS released in 1982, the system sold 600,000 units by 1989. The console was distributed by Epoch Co. in Japan in 1979 under the name "Cassette TV Game", but not sell as well as Epoch's own Cassette Vision system in 1981. In 1982, Atari launched its second programmable console, the Atari 5200. To standardize naming, the VCS was renamed to the "Atari 2600 Video Computer System", or "Atari 2600", derived from the manufacture part number CX2600. By 1982, the 2600 cost Atari about to make and was sold for an average of . The company spent .50 to to manufacture each cartridge, plus to for advertising, wholesaling for . Third-party development Activision, formed by Crane, Whitehead, and Miller in 1979, started developing third-party VCS games using their knowledge of VCS design and programming tricks, and began releasing games in 1980. Kaboom! (1981) and Pitfall! (1982) are among the most successful with at least one and four million copies sold, respectively. In 1980, Atari attempted to block the sale of the Activision cartridges, accusing the four of intellectual property infringement. The two companies settled out of court, with Activision agreeing to pay Atari a licensing fee for their games. This made Activision the first third-party video game developer and established the licensing model that continues to be used by console manufacturers for game development. Activision's success led to the establishment of other third-party VCS game developers following Activision's model in the early 1980s, including U.S. Games, Telesys, Games by Apollo, Data Age, Zimag, Mystique, and CommaVid. The founding of Imagic included ex-Atari programmers. Mattel and Coleco, each already producing its own more advanced console, created simplified versions of their existing games for the 2600. Mattel used the M Network brand name for its cartridges. Third-party games accounted for half of VCS game sales by 1982. Decline and redesign In addition to third-party game development, Atari also received the first major threat to its hardware dominance from the Colecovision. Coleco had a license from Nintendo to develop a version of the arcade game Donkey Kong (1981), which was bundled with every Colecovision console. Coleco gained about 17% of the hardware market in 1982 compared to Atari's 58%. With third parties competing for market share, Atari worked to maintain dominance in the market by acquiring licenses for popular arcade games and other properties to make games from. Pac-Man has numerous technical and aesthetic flaws, but nevertheless more than 7 million copies were sold. Heading into the 1982 holiday shopping season, Atari had placed high sales expectations on E.T. the Extra-Terrestrial, a game programmed in about six weeks. Atari produced an estimated four million cartridges, but the game was poorly reviewed, and only about 1.5 million units were sold. Warner Communications reported weaker results than expected in December 1982 to its shareholders, having expected a 50% year-to-year growth but only obtaining 10–15% due to declining sales at Atari. Coupled with the oversaturated home game market, Atari's weakened position led investors to start pulling funds out of video games, beginning a cascade of disastrous effects known as the video game crash of 1983. Many of the third-party developers formed prior to 1983 were closed, and Mattel and Coleco left the video game market by 1985. In September 1983, Atari sent 14 truckloads of unsold Atari 2600 cartridges and other equipment to a landfill in the New Mexico desert, later labeled the Atari video game burial. Long considered an urban legend that claimed the burial contained millions of unsold cartridges, the site was excavated in 2014, confirming reports from former Atari executives that only about 700,000 cartridges had actually been buried. Atari reported a loss for 1983 as a whole, and continued to lose money into 1984, with a loss reported in the second quarter. By mid-1984, software development for the 2600 had essentially stopped except that of Atari and Activision. Warner, wary of supporting its failing Atari division, started looking for buyers in 1984. Warner sold most of Atari to Jack Tramiel, the founder of Commodore International, in July 1984 for about , though Warner retained Atari's arcade business. Tramiel was a proponent of personal computers, and halted all new 2600 game development soon after the sale. The North American video game market did not recover until about 1986, after Nintendo's 1985 launch of the Nintendo Entertainment System in North America. Atari Corporation released a redesigned model of the 2600 in 1986, supported by an ad campaign touting a price of "under 50 bucks". With a large library of cartridges and a low price point, the 2600 continued to sell into the late 1980s. Atari released the last batch of games in 1989–90 including Secret Quest and Fatal Run. By 1986, over Atari VCS units had been sold worldwide. The final Atari-licensed release is the PAL-only version of the arcade game KLAX in 1990. After more than 14 years on the market, the 2600 line was formally discontinued on January 1, 1992, along with the Atari 7800 and Atari 8-bit family of home computers. In Europe, last stocks of the 2600 were sold until Summer/Fall of 1995. Hardware Console The Atari 2600's CPU is the MOS Technology 6507, a version of the 6502, running at 1.19 MHz in the 2600. Though their internal silicon was identical, the 6507 was cheaper than the 6502 because its package included fewer memory-address pins—13 instead of 16. The designers of the Atari 2600 selected an inexpensive cartridge interface that has one fewer address pins than the 13 allowed by the 6507, further reducing the already limited addressable memory from 8 KB (213 = 8,192) to 4 KB (212 = 4,096). This was believed to be sufficient as Combat is itself only 2 KB. Later games circumvented this limitation with bank switching. The console has 128 bytes of RAM for scratch space, the call stack, and the state of the game environment. The top bezel of the console originally had six switches: power, TV type selection (color or black-and-white), game selection, player difficulty, and game reset. The difficulty switches were moved to the back of the bezel in later versions of the console. The back bezel also included the controller ports, TV output, and power input. Graphics The Atari 2600 was designed to be compatible with the cathode-ray tube television sets produced in the late 1970s and early 1980s, which commonly lack auxiliary video inputs to receive audio and video from another device. Therefore, to connect to a TV, the console generates a radio frequency signal compatible with the regional television standards (NTSC, PAL, or SECAM), using a special switch box to act as the television's antenna. Atari developed the Television Interface Adaptor (TIA) chip in the VCS to handle the graphics and conversion to a television signal. It provides a single-color, 20-bit background register that covers the left half of the screen (each bit represents 4 adjacent pixels) and is either repeated or reflected on the right side. There are 5 single-color sprites: two 8-pixel wide players; two 1 bit missiles, which share the same colors as the players; and a 1-pixel ball, which shares the background color. The 1-bit sprites all can be controlled to stretch to 1, 2, 4, or 8 pixels. The system was designed without a frame buffer to avoid the cost of the associated RAM. The background and sprites apply to a single scan line, and as the display is output to the television, the program can change colors, sprite positions, and background settings. The careful timing required to sync the code to the screen on the part of the programmer was labeled "racing the beam"; the actual game logic runs when the television beam is outside of the visible area of the screen. Early games for the system use the same visuals for pairs of scan lines, giving a lower vertical resolution, to allow more time for the next row of graphics to be prepared. Later games, such as Pitfall!, change the visuals for each scan line or extend the black areas around the screen to extend the game code's processing time. Regional releases of the Atari 2600 use modified TIA chips for each region's television formats, which require games to be developed and published separately for each region. All modes are 160 pixels wide. NTSC mode provides 192 visible lines per screen, drawn at 60 Hz, with 16 colors, each at 8 levels of brightness. PAL mode provides more vertical scanlines, with 228 visible lines per screen, but drawn at 50 Hz and only 13 colors. SECAM mode, also a 50 Hz format, is limited to 8 colors, each with only a single brightness level. Controllers The first VCS bundle has two types of controllers: a joystick (part number CX10) and pair of rotary paddle controllers (CX30). Driving controllers, which are similar to paddle controllers but can be continuously rotated, shipped with the Indy 500 launch game. After less than a year, the CX10 joystick was replaced with the CX40 model designed by James C. Asher. Because the Atari joystick port and CX40 joystick became industry standards, 2600 joysticks and some other peripherals work with later systems, including the MSX, Commodore 64, Amiga, Atari 8-bit family, and Atari ST. The CX40 joystick can be used with the Master System and Sega Genesis, but does not provide all the buttons of a native controller. Third-party controllers include Wico's Command Control joystick. Later, the CX42 Remote Control Joysticks, similar in appearance but using wireless technology, were released, together with a receiver whose wires could be inserted in the controller jacks. Atari introduced the CX50 Keyboard Controller in June 1978 along with two games that require it: Codebreaker and Hunt & Score. The similar, but simpler, CX23 Kid's Controller was released later for a series of games aimed at a younger audience. The CX22 Trak-Ball controller was announced in January 1983 and is compatible with the Atari 8-bit family. There were two attempts to turn the Atari 2600 into a keyboard-equipped home computer: Atari's never-released CX3000 "Graduate" keyboard, and the CompuMate keyboard by Spectravideo which was released in 1983. Console models Minor revisions The initial production of the VCS was made in Sunnyvale during 1977, using thick polystyrene plastic for the casing as to give the impression of weight from what was mostly an empty shell inside. The initial Sunnyvale batch had also included potential mounts for an internal speaker system on the casing, though the speakers were found to be too expensive to include and instead sound was routed through the TIA to the connected television. All six console switches on the front panel. Production of the unit was moved to Taiwan in 1978, where a less thick internal metal shielding was used and thinner plastic was used for the casing, reducing the system's weight. These two versions are commonly referred to as "Heavy Sixers" and "Light Sixers" respectively, referencing the six front switches. In 1980, the difficulty switches were moved to the back of the console, leaving four switches on the front. Otherwise, these four-switch consoles look nearly identical to the earlier six-switch models. In 1982 Atari rebranded the console as the "Atari 2600", a name first used on a version of the four-switch model without woodgrain, giving it an all-black appearance. Sears Video Arcade Atari continued its OEM relationship with Sears under the latter's Tele-Games brand, which started in 1975 with the original Pong. This is unrelated to the company Telegames, which later produced 2600 cartridges. Sears released several models of the VCS as the Sears Video Arcade series starting in 1977. In 1983, the previously Japan-only Atari 2800 was rebranded as the Sears Video Arcade II. Sears released versions of Atari's games with Tele-Games branding, usually with different titles. Three games were produced by Atari for Sears as exclusive releases: Steeplechase, Stellar Track, and Submarine Commander. Atari 2800 The Atari 2800 is the Japanese version of the 2600 released in October 1983. It is the first Japan-specific release of a 2600, though companies like Epoch had distributed the 2600 in Japan previously. The 2800 was released a short time after Nintendo's Family Computer (which became the dominant console in Japan), and it did not gain a significant share of the market. Sears previously released the 2800 in the US during late 1982 as the Sears Video Arcade II, which came packaged with two controllers and Space Invaders. Around 30 specially branded games were released for the 2800. Designed by engineer Joe Tilly, the 2800 has four controller ports instead of the two of the 2600. The controllers are an all-in one design using a combination of an 8-direction digital joystick and a 270-degree paddle, designed by John Amber. The 2800's case design departed from the 2600, using a wedge shape with non-protruding switches. The case style is the basis for the Atari 7800, which was redesigned for the 7800 by Barney Huang. Atari 2600 Jr. The 1986 model has a smaller, cost-reduced form factor with an Atari 7800-like appearance. It was advertised as a budget gaming system (under ) with the ability to run a large collection of games. Released after the video game crash of 1983, and after the North American launch of the Nintendo Entertainment System, the 2600 was supported with new games and television commercials promoting "The fun is back!". Atari released several minor stylistic variations: the "large rainbow" (shown), "short rainbow", and an all-black version sold only in Ireland. Later European versions include a joypad. Unreleased prototypes The Atari 2700 was a version of the 2600 with wireless controllers. The CX2000, with integrated joystick controllers, was a redesign based on human factor analysis by Henry Dreyfuss Associates. The circa-1982 Atari 3200 was a backward compatible 2600 successor. Related hardware The Atari 7800, announced in 1984 and released in 1986, is the official successor to the Atari 2600 and is backward compatible with 2600 cartridges. Multiple microconsoles are based on the Atari 2600: The TV Boy includes 127 games in an enlarged joypad. The Atari Classics 10-in-1 TV Game, manufactured by Jakks Pacific, emulates the 2600 with ten games inside a Atari-style joystick with composite-video output. The Atari Flashback 2 (2005) contains 40 games, with four additional programs unlocked by a cheat code. It is compatible with original 2600 controllers and can be modified to play original 2600 cartridges. In 2017, Hyperkin announced the RetroN 77, a clone of the Atari 2600 that plays original cartridges instead of preinstalled games. The Atari VCS (2021 console) can download and emulate 2600 games via an online store. Atari, Inc. plans to release the Atari 2600+, an 80% scale replica of the 1980 CX2600-A model, on November 17, 2023. The 2600+ includes support for original Atari 2600 and 7200 cartridges. Games In 1977, nine games were released on cartridge to accompany the launch of the console: Air-Sea Battle, Basic Math, Blackjack, Combat, Indy 500, Star Ship, Street Racer, Surround, and Video Olympics. Indy 500 shipped with special "driving controllers", which are like paddles but rotate freely. Street Racer and Video Olympics use the standard paddle controllers. Atari, Inc. was the only developer for the first few years, releasing dozens of games. Atari determined that box art featuring only descriptions of the game and screenshots would not be sufficient to sell games in retail stores, since most games were based on abstract principles and screenshots give little information. Atari outsourced box art to Cliff Spohn, who created visually interesting artwork with implications of dynamic movement intended to engage the player's imagination while staying true to the gameplay. Spohn's style became a standard for Atari when bringing in assistant artists, including Susan Jaekel, Rick Guidice, John Enright, and Steve Hendricks. Spohn and Hendricks were the largest contributors to the covers in the Atari 2600 library. Ralph McQuarrie, a concept artist on the Star Wars series, was commissioned for one cover, the arcade conversion of Vanguard. These artists generally conferred with the programmer to learn about the game before drawing the art. An Atari VCS port of the Breakout arcade game appeared in 1978. The original is in black and white with a colored overlay, and the home version is in color. In 1980, Atari released Adventure, the first action-adventure game, and the first home game with a hidden Easter egg. Rick Maurer's port of Taito's Space Invaders, released in 1980, is the first VCS game to have more than one million copies sold—eventually doubling that within a year and totaling more than cartridges by 1983. It became the killer app to drive console sales. Versions of Atari's own Asteroids and Missile Command arcade games, released in 1981, were also major hits. Each early VCS game is in a 2K ROM. Later games, like Space Invaders, have 4K. The VCS port of Asteroids (1981) is the first game for the system to use 8K via a bank switching technique between two 4K segments. Some later releases, including Atari's ports of Dig Dug and Crystal Castles, are 16K cartridges. One of the final games, Fatal Run (1990), doubled this to 32K. Two Atari-published games, both from the system's peak in 1982, E.T. the Extra-Terrestrial and Pac-Man, are cited as factors in the video game crash of 1983. A company named American Multiple Industries produced a number of pornographic games for the 2600 under the Mystique Presents Swedish Erotica label. The most notorious, Custer's Revenge, was protested by women's and Native American groups because it depicted General George Armstrong Custer raping a bound Native American woman. Atari sued American Multiple Industries in court over the release of the game. Legacy The 2600 was so successful in the late 1970s and early 1980s that "Atari" was a synonym for the console in mainstream media and for video games in general. Jay Miner directed the creation of the successors to the 2600's TIA chip—CTIA and ANTIC—which are central to the Atari 8-bit computers released in 1979 and later the Atari 5200 console. The Atari 2600 was inducted into the National Toy Hall of Fame at The Strong in Rochester, New York, in 2007. In 2009, the Atari 2600 was named the number two console of all time by IGN, which cited its remarkable role behind both the first video game boom and the video game crash of 1983, and called it "the console that our entire industry is built upon". In November 2021, the current incarnation of Atari announced three 2600 games to be published under "Atari XP" label: Yars' Return, Aquaventure, and Saboteur. These were previously included in Atari Flashback consoles. Notes References Citations General bibliography External links A history of the Atari VCS/2600 Inside the Atari 2600 Hardware and prototypes at the Atari Museum 1970s toys 1980s toys 2600 Computer-related introductions in 1977 Home video game consoles Products and services discontinued in 1992 Second-generation video game consoles 65xx-based video game consoles Discontinued video game consoles
2780
https://en.wikipedia.org/wiki/Atari%205200
Atari 5200
The Atari 5200 SuperSystem or simply Atari 5200 is a home video game console introduced in 1982 by Atari, Inc. as a higher-end complement for the popular Atari Video Computer System. The VCS was renamed to the Atari 2600 at the time of the 5200's launch. Created to compete with Mattel's Intellivision, the 5200 wound up a direct competitor of ColecoVision shortly after its release. While the Coleco system shipped with the first home version of Nintendo's Donkey Kong, the 5200 included the 1978 arcade game Super Breakout which had already appeared on the Atari 8-bit family and Atari VCS in 1979 and 1981 respectively. The CPU and the graphics and sound hardware are almost identical to that of the Atari 8-bit computers, although software is not directly compatible between the two systems. The 5200's controllers have an analog joystick and a numeric keypad along with start, pause, and reset buttons. The 360-degree non-centering joystick was touted as offering more control than the eight-way Atari CX40 joystick of the 2600, but was a focal point for criticism. On May 21, 1984, during a press conference at which the Atari 7800 was introduced, company executives revealed that the 5200 had been discontinued after less than two years on the market. Total sales of the 5200 were reportedly in excess of 1 million units, far short of its predecessor's sales of over 30 million. Hardware Much of the technology in the Atari 8-bit family of home computer was originally developed as a second-generation games console intended to replace the Atari Video Computer System console. However, as the system was reaching completion, the personal computer revolution was starting with the release of machines like the Commodore PET, TRS-80, and Apple II. These machines had less advanced hardware than the new Atari technology, but sold for much higher prices with associated higher profit margins. Atari's management decided to enter this market, and the technology was repackaged into the Atari 400 and 800. The chipset used in these machines was created with the mindset that the VCS would likely be obsolete by 1980. Atari later decided to re-enter the games market with a design that closely matched their original 1978 specifications. In its prototype stage, the Atari 5200 was originally called the "Atari Video System X – Advanced Video Computer System", and was codenamed "Pam" after a female employee at Atari, Inc. It is also rumored that PAM actually stood for "Personal Arcade Machine", as the majority of games for the system ended up being arcade conversions. Actual working Atari Video System X machines, whose hardware is 100% identical to the Atari 5200 do exist, but are extremely rare. The initial 1982 release of the system featured four controller ports, where nearly all other systems of the day had only one or two ports. The 5200 also featured a new style of controller with an analog joystick, numeric keypad, two fire buttons on each side of the controller and game function keys for Start, Pause, and Reset. The 5200 also featured the innovation of the first automatic TV switchbox, allowing it to automatically switch from regular TV viewing to the game system signal when the system was activated. Previous RF adapters required the user to slide a switch on the adapter by hand. The RF box was also where the power supply connected in a unique dual power/television signal setup similar to the RCA Studio II's. A single cable coming out of the 5200 plugged into the switch box and carried both electricity and the television signal. The 1983 revision of the Atari 5200 has two controller ports instead of four, and a change back to the more conventional separate power supply and standard non-autoswitching RF switch. It also has changes in the cartridge port address lines to allow for the Atari 2600 adapter released that year. While the adapter was only made to work on the two-port version, modifications can be made to the four-port to make it line-compatible. In fact, towards the end of the four-port model's production run, there were a limited number of consoles produced which included these modifications. These consoles can be identified by an asterisk in their serial numbers. At one point following the 5200's release, Atari planned a smaller, cost-reduced version of the Atari 5200, which removed the controller storage bin. Code-named the "Atari 5100" (a.k.a. "Atari 5200 Jr."), only a few fully working prototype 5100s were made before the project was canceled. Controllers The controller prototypes used in the electrical development lab employed a yoke-and-gimbal mechanism that came from an RC airplane controller kit. The design of the analog joystick, which used a weak rubber boot rather than springs to provide centering, proved to be ungainly and unreliable. They quickly became the Achilles' heel of the system due to the combination of an overly complex mechanical design and a very low-cost internal flex circuit system. Another major flaw of the controllers was that the design did not translate into a linear acceleration from the center through the arc of the stick travel. The controllers did, however, include a pause button, a unique feature at the time. Various third-party replacement joysticks were also released, including those made by Wico. Atari Inc. released the Pro-Line Trak-Ball controller for the system, which was used primarily for gaming titles such as Centipede and Missile Command. A paddle controller and an updated self-centering version of the original controller were also in development, but never made it to market. Games were shipped with plastic card overlays that snapped in over the keypad. The card would indicate which game functions, such as changing the view or vehicle speed, were assigned to each key. The primary controller was ranked the 10th worst video game controller by IGN editor Craig Harris. An editor for Next Generation said that their non-centering joysticks "rendered many games nearly unplayable". Internal differences from 8-bit computers David H. Ahl in 1983 described the Atari 5200 as "a 400 computer in disguise". Its internal design is a tweaked version of the Atari 8-bit family using the ANTIC, POKEY, and GTIA coprocessors. Software designed for one does not run on the other, but source code can be mechanically converted unless it uses computer-specific features. Antic magazine reported in 1984 that "the similarities grossly outweigh the differences, so that a 5200 program can be developed and almost entirely debugged [on an Atari 8-bit computer] before testing on a 5200". John J. Anderson of Creative Computing alluded to the incompatibility being intentional, caused by Atari's console division removing 8-bit compatibility to not lose control to the rival computer division. Besides the 5200's lack of a keyboard, the differences are: The Atari computer 10 KB operating system is replaced with a simpler 2 KB version, of which 1 KB is the built-in character set. Some hardware registers, such as those of the GTIA and POKEY chips, are at different memory locations. The purpose of some registers is slightly different on the 5200. The 5200's analog joysticks appear as pairs of paddles to the hardware, which requires different input handling from the digital joystick input on the Atari computers In 1987, Atari Corporation released the XE Game System console, which is a repackaged 65XE (from 1985) with a detachable keyboard that can run home computer titles directly, unlike the 5200. Anderson wrote in 1984 that Atari could have released a console compatible with computer software in 1981. Reception The Atari 5200 did not fare well commercially compared to its predecessor, the Atari 2600. While it touted superior graphics to the 2600 and Mattel's Intellivision, the system was initially incompatible with the 2600's expansive library of games, and some market analysts have speculated that this hurt its sales, especially since an Atari 2600 cartridge adapter had been released for the Intellivision II. (A revised two-port model was released in 1983, along with a game adapter that allowed gamers to play all 2600 games.) This lack of new games was due in part to a lack of funding, with Atari continuing to develop most of its games for the saturated 2600 market. Many of the 5200's games appeared simply as updated versions of 2600 titles, which failed to excite consumers. Its pack-in game, Super Breakout, was criticized for not doing enough to demonstrate the system's capabilities. This gave the ColecoVision a significant advantage as its pack-in, Donkey Kong, delivered a more authentic arcade experience than any previous game cartridge. In its list of the top 25 game consoles of all time, IGN claimed that the main reason for the 5200's market failure was the technological superiority of its competitor, while other sources maintain that the two consoles are roughly equivalent in power. The 5200 received much criticism for the "sloppy" design of its non-centering analog controllers. Anderson described the controllers as "absolutely atrocious". David H. Ahl of Creative Computing Video & Arcade Games said in 1983 that the "Atari 5200 is, dare I say it, Atari's answer to Intellivision, Colecovision, and the Astrocade", describing the console as a "true mass market" version of the Atari 8-bit computers despite the software incompatibility. He criticized the joystick's imprecise control but said that "it is at least as good as many other controllers", and wondered why Super Breakout was the pack-in game when it did not use the 5200's improved graphics. Technical specifications CPU: Custom MOS Technology 6502C @ 1.79 MHz (not a 65C02) Graphics chips: ANTIC and GTIA Support hardware: 3 custom VLSI chips Screen resolution: 14 modes: Six text modes (8×8, 4×8, and 8×10 character matrices supported), Eight graphics modes including 80 pixels per line (16 color), 160 pixels per line (4 color), 320 pixels per line (2 color), variable height and width up to overscan 384×240 pixels Color palette: 128 (16 hues, 8 luma) or 256 (16 hues, 16 luma) Colors on screen: 2 (320 pixels per line) to 16 (80 pixels per line). Up to 23 colors per line with player/missile and playfield priority control mixing. Register values can be changed at every scanline using ANTIC display list interrupts, allowing up to 256 (16 hues, 16 luma) to be displayed at once, with up to 16 per scanline. Sprites: Four 8-pixel-wide sprites, four 2-pixel-wide sprites; height of each is either 128 or 256 pixels; 1 color per sprite Scrolling: Coarse and fine scrolling horizontally and vertically. (Horizontal coarse scroll 4, 8, or 16-pixel/color clock increments, and vertically by mode line height 2, 4, 8, or 16 scan lines.) (Or horizontal fine scroll 0 to 3, 7, or 15 single-pixel/color clock increments and then a 4, 8, or 16-pixel/color clock increment coarse scroll; and vertical fine scroll 0 to 1, 3, 7, or 15 scan line increments and then a 2, 4, 8, or 16 scan line increment coarse scroll) Sound: 4-channel PSG sound via POKEY sound chip, which also handles keyboard scanning, serial I/O, high resolution interrupt capable timers (single cycle accurate), and random number generation. RAM: 16 KB ROM: 2 KB on-board BIOS for system startup and interrupt routing. 32 KB ROM window for standard game cartridges, expandable using bank switching techniques. Dimensions: 13" × 15" × 4.25" Popular culture Critical to the plot of the 1984 film Cloak & Dagger is an Atari 5200 game cartridge called Cloak & Dagger. The arcade version appears in the movie; in actuality the Atari 5200 version was started but never completed. The game was under development with the title Agent X when the movie producers and Atari learned of each other's projects and decided to cooperate. This collaboration was part of a larger phenomenon, of films featuring video games as critical plot elements (as with Tron and The Last Starfighter) and of video game tie-ins to the same films (as with the Tron games for the Intellivision and other platforms). Games See also List of Atari 5200 emulators Video game crash of 1983 References External links AtariAge – Comprehensive Atari 5200 database and information Atari Museum 5200 Super System section 5200 Home video game consoles Second-generation video game consoles Products introduced in 1982 65xx-based video game consoles Discontinued video game consoles
2781
https://en.wikipedia.org/wiki/Atari%207800
Atari 7800
The Atari 7800 ProSystem, or simply the Atari 7800, is a home video game console officially released by Atari Corporation in 1986 as the successor to both the Atari 2600 and Atari 5200. It can run almost all Atari 2600 cartridges, making it one of the first consoles with backward compatibility. It shipped with a different model of joystick from the 2600-standard CX40 and Pole Position II as the pack-in game. Most of the announced titles at launch were ports of 1981–1983 arcade video games. Designed by General Computer Corporation, the 7800 has significantly improved graphics hardware over Atari's previous consoles, but the same Television Interface Adaptor chip that launched with the 2600 in 1977 is used to generate audio. In an effort to prevent the flood of poor quality games that contributed to the video game crash of 1983, cartridges had to be digitally signed by Atari. The Atari 7800 was first announced by Atari, Inc. on May 21, 1984, but a general release was shelved until May 1986 due to the sale of the company. Atari Corporation dropped support for the 7800, along with the 2600 and the Atari 8-bit family, on January 1, 1992. History Atari had been facing pressure from Coleco and its ColecoVision console, which supported graphics that more closely mirrored arcade games of the time than either the Atari 2600 or 5200. The Atari 5200 (released as a successor to the Atari 2600) was criticized for not being able to play 2600 games without an adapter. The Atari 7800 ProSystem was the first console from Atari, Inc. designed by an outside company, General Computer Corporation. It was designed in 1983–84 with an intended mass market rollout in June 1984, but was canceled after the sale of the company to Tramel Technology Ltd on July 2, 1984. The project was originally called the Atari 3600. With a background in creating arcade games such as Food Fight, GCC designed the new system with a graphics architecture similar to arcade machines of the time. The CPU is a slightly customized 6502 processor, the Atari SALLY, running at 1.79 MHz. By some measures the 7800 is more powerful, and by others less, than the 1983 Nintendo Entertainment System. It uses the 2600's Television Interface Adaptor chip, with the same restrictions, for generating two-channels of audio. Launch The 7800 was announced on May 21, 1984. Thirteen games were announced for the system's launch: Ms. Pac-Man, Pole Position II, Centipede, Joust, Dig Dug, Nile Flyer (eventually released as Desert Falcon), Robotron: 2084, Galaga, Food Fight, Ballblazer, Rescue on Fractalus! (later canceled), Track & Field, and Xevious. On July 2, 1984, Warner Communications sold Atari's Consumer Division to Jack Tramiel. All projects were halted during an initial evaluation period. GCC had not been paid for their development of the 7800, and Warner and Tramiel fought over who was accountable. In May 1985, Tramiel relented and paid GCC. This led to additional negotiations regarding the launch titles GCC had developed, then an effort to find someone to lead their new video game division, which was completed in November 1985. The original production run of the Atari 7800 languished in warehouses until it was introduced in January 1986. The console was released nationwide in May 1986 for $79.95. It launched with titles intended for the 7800's debut in 1984 and was aided by a marketing campaign with a budget in the "low millions" according to Atari Corporation officials. This was substantially less than the $9 million spent by Sega and the $16 million spent by Nintendo. The keyboard and high score cartridge planned by Warner were cancelled. In February 1987, Computer Entertainer reported that 100,000 Atari 7800 consoles had been sold in the United States, including those which had been warehoused since 1984. This was less than the Master System's 125,000 and the NES's 1.1 million. A complaint from owners in 1986 was the slow release of games. Galaga in August was followed by Xevious in November. By the end of 1986, the 7800 had 10 games, compared to Sega's 20 and Nintendo's 36. Atari would sell over 1 million 7800 consoles by June 1988. Discontinuation On January 1, 1992, Atari Corporation announced the end of production and support for the 7800, 2600, and the 8-bit computer family including the Atari XEGS. At least one game, an unreleased port of Toki, was worked on past this date. By the time of the discontinuation, the Nintendo Entertainment System controlled 80% of the North American market while Atari had 12%. In Europe, last stocks of the 7800 were sold until summer/fall of 1995. Retro Gamer magazine issue 132 reported that according to Atari UK Marketing Manager Darryl Still, "it was very well stocked by European retail; although it never got the consumer traction that the 2600 did, I remember we used to sell a lot of units through mail order catalogues and in the less affluent areas". Technical specifications CPU: Atari SALLY (custom variant of the 6502) 1.79 MHz, which drops to 1.19 MHz when the Television Interface Adaptor or (6532 RAM-I/O-Timer) chips are accessed Unlike a standard 6502, SALLY can be halted in a known state with a single pin to let other devices control the bus. Sometimes referred to by Atari as "6502C", but not the same as the official MOS Technology 6502C. RAM: 4 KB (2 6116 2Kx8 RAM ICs) ROM: built in 4 KB BIOS ROM, 48 KB Cartridge ROM space without bank switching Graphics: MARIA custom chip Resolution: 160×240 (160×288 PAL) or 320×240 (320×288 PAL) Color palette: 256 (16 hues * 16 luma), different graphics modes restricted the number of usable colors and the number of colors per sprite Direct Memory Access (DMA) Graphics clock: 7.15 MHz Line buffer: 200 bytes (double buffering), 160 sprite pixels per scanline, up to 30 sprites per scanline (without background), up to 100 sprites on screen Sprite/zone sizes: 4 to 160 width, height of 4, 8 or 16 pixels Colors per sprite: 1 to 12 (1 to 8 visible colors, 1 to 4 transparency bits) I/O: Joystick and console switch IO handled by 6532 RIOT and TIA Ports 2 joystick ports cartridge port expansion connector power in RF output Sound: TIA as used in the 2600 for video and sound. In 7800 mode it is only used for sound. At least two games include a POKEY sound chip for improved audio. Graphics Graphics are generated by the custom MARIA chip, which uses an approach common in contemporary arcade system boards and is different from other second and third generation consoles. Instead of a limited number of hardware sprites, MARIA treats everything as a sprite described in a series of display lists. Each display list contains pointers to graphics data and color and positioning information. MARIA supports a palette of 256 colors and graphics modes which are either 160 pixels wide or 320 pixels wide. While the 320 pixel modes theoretically enable the 7800 to create games at higher resolution than the 256 pixel wide graphics found in the Nintendo Entertainment System and Master System, the processing demands of MARIA result in most games using the 160 pixel mode. Each sprite can have from 1 to 12 colors, with 3 colors plus transparency being the most common. In this format, the sprite references one of 8 palettes, where each palette holds 3 colors. The background–visible when not covered by other objects–can also be assigned a color. In total, 25 colors can appear on a scan line. The graphics resolution, color palettes, and background color can be adjusted between scan lines. This can be used to render high resolution text in one area of the screen, while displaying more colorful graphics at lower resolution in the gameplay area. Sound The 7800 uses the TIA chip for two channel audio, the same chip used in the 1977 Atari VCS, and the sound is of the same quality as that system. To compensate, GCC's engineers allowed games to include a POKEY audio chip in the cartridge. Only Ballblazer and Commando do this. GCC planned to make a low-cost, high performance sound chip, GUMBY, which could also be placed in 7800 cartridges to enhance its sound capabilities further. This project was cancelled when Atari was sold to Jack Tramiel. Digitally signed cartridges Following the large number of low quality, third party games for the Atari 2600, Atari required that cartridges for the 7800 be digitally signed. When a cartridge is inserted into the system, the BIOS generates a signature of the cartridge ROM and compares it to the one stored on the cartridge. If they match, the console operates in 7800 mode, granting the game access to MARIA and other features, otherwise the console operates as a 2600. This digital signature code is not present in PAL 7800s, which use various heuristics to detect 2600 cartridges, due to export restrictions. Backward compatibility The 7800's compatibility with the Atari 2600 is made possible by including many of the same chips used in the 2600. When playing an Atari 2600 game, the 7800 uses a Television Interface Adaptor chip to generate graphics and sound. The processor is slowed to 1.19 MHz, to mirror the performance of the 2600's 6507 chip. RAM is limited to 128 bytes and cartridge data is accessed in 4K blocks. When in 7800 mode (signified by the appearance of the full-screen Atari logo), the graphics are generated entirely by the MARIA graphics processing unit. All system RAM is available and cartridge data is accessed in larger 48K blocks. The system's SALLY 6502 runs at its normal 1.79 MHz. The 2600 chips are used to generate sound and to provide the interfaces to the controllers and console switches. System revisions Initial version: two joystick ports on lower front panel. Side expansion port for upgrades and add-ons. Bundled with two CX24 Pro-Line joysticks, AC adapter, switchbox, RCA connecting cable, and Pole Position II cartridge. Second revision: Slightly revised motherboard. Expansion port connector removed from motherboard but is still etched. Shell has indentation of where expansion port was to be. Third revision: Same as above but with only a small blemish on the shell where the expansion port was. Peripherals The Atari 7800 came bundled with the Atari Pro-Line Joystick, a two-button controller with a joystick for movement. The Pro-Line was developed for the 2600 and advertised in 1983, but delayed until Atari proceeded with the 7800. The right fire button only works as a separate fire button for certain 7800 games; otherwise, it duplicates the left fire button, allowing either button to be used for 2600 games. While physically compatible, the 7800's controllers do not work with the Sega Master System, and Sega's controllers are unable to use the 7800's two-button mode. In response to criticism over ergonomic issues with the Pro-Line controllers, Atari later released a joypad controller with the European 7800. Similar in style to controllers found on Nintendo and Sega systems, it was not available in the United States. The Atari XG-1 light gun, bundled with the Atari XEGS and also sold separately, is compatible with the 7800. Atari released five 7800 light gun games: Alien Brigade, Barnyard Blaster, Crossbow, Meltdown, and Sentinel. Cancelled peripherals After the acquisition of the Atari Consumer Division by Jack Tramiel in 1984, several expansion options for the system were cancelled: The High Score Cartridge was designed to save high scores for up to 65 separate games. The cartridge was intended as a pass-through device, similar to the later Game Genie. Nine games were programmed to support the cartridge. The expansion port, to allow for the addition of a planned computer keyboard and connection to laserdisc players and other peripherals, was removed in the second and third revisions of the 7800. A dual joystick holder was designed for Robotron: 2084 and future games like Battlezone, but not produced. Games While the system can play the over 400 games for the Atari 2600, there were only 59 official releases for the 7800. The lineup emphasized high-quality versions of games from the golden age of arcade video games. Pole Position II, Dig Dug, and Galaga, by the time of the 1986 launch, were three, four, and five years old, respectively. A raster graphics version of 1979's Asteroids was released in 1987. In 1988, Atari published a conversion of Nintendo's Donkey Kong, seven years after the original arcade game and five years after the Atari 8-bit family cartridge. Atari also marketed a line of games called "Super Games" which were arcade and computer games previously not playable on a home console such as One-On-One Basketball and Impossible Mission. Eleven games were developed and sold by three third-party companies under their own labels (Absolute Entertainment, Activision, and Froggo) with the rest published by Atari Corporation. Most of the games from Atari were developed by outside companies under contract. Some NES games were developed by companies who had licensed their title from a different arcade manufacturer. While the creator of the NES version would be restricted from making a competitive version of an NES game, the original arcade copyright holder was not precluded from licensing out rights for a home version of an arcade game to multiple systems. Through this loophole, Atari 7800 conversions of Mario Bros., Double Dragon, Commando, Rampage, Xenophobe, Ikari Warriors, and Kung-Fu Master were licensed and developed. A final batch of games was released by Atari in 1990: Alien Brigade, Basketbrawl, Fatal Run, Meltdown, Midnight Mutants, MotorPsycho, Ninja Golf, Planet Smashers, and Scrapyard Dog. Scrapyard Dog was later released for the Atari Lynx. Legacy Atari Flashback In 2004, the Infogrames-owned version of Atari released the Atari Flashback console. It resembles a miniature Atari 7800 and has five 7800 and fifteen 2600 games built-in. Built using the NES-On-A-Chip hardware instead of recreating the Atari 7800 hardware, it was criticized for failing to properly replicate the actual gaming experience. A subsequent 7800 project was cancelled after prototypes were made. Game development The digital signature long prevented aftermarket games from being developed. The signing software was eventually found and released at Classic Gaming Expo in 2001. Several new Atari 7800 games such as Beef Drop, B*nQ, Combat 1990, CrazyBrix, Failsafe, and Santa Simon have been released.. Source code The source code for 13 games, the operating system, and the development tools which run on the Atari ST were discovered in a dumpster behind the Atari building in Sunnyvale, California. Commented assembly language source code was made available for Centipede, Commando, Crossbow, Desert Falcon, Dig Dug, Food Fight, Galaga, Hat Trick, Joust, Ms. Pac-Man, Super Stunt Cycle, Robotron: 2084, and Xevious. See also History of Atari List of Atari 7800 games List of Atari 2600 games References External links AtariAge – Comprehensive Atari 7800 database and information Atari 7800 Information & Resources Atari Museum – History of the Atari 7800 ProSystem Atari 7800 Development Wiki ProSystem emulator for Microsoft Windows 7800 Home video game consoles Backward-compatible video game consoles Third-generation video game consoles 1986 in video gaming Computer-related introductions in 1986 Products introduced in 1986 Products and services discontinued in 1992 1980s toys 65xx-based video game consoles Discontinued video game consoles
2782
https://en.wikipedia.org/wiki/Atari%20Jaguar
Atari Jaguar
The Atari Jaguar is a home video game console developed by Atari Corporation and released in North America in November 1993. Part of the fifth generation of video game consoles, it competed with the 16-bit Sega Genesis, the Super NES and the 32-bit 3DO Interactive Multiplayer that launched the same year. Powered by two custom 32-bit Tom and in addition to a Motorola 68000, Atari marketed it as the world's first 64-bit game system, emphasizing its 64-bit bus used by the blitter. The Jaguar launched with Cybermorph as the pack-in game, which received divisive reviews. The system's library ultimately comprised only 50 licensed games. Development of the Atari Jaguar started in the early 1990s by Flare Technology, which focused on the system after cancellation of the Atari Panther console. The Jaguar was an important system for Atari after the company shifted its focus from computers - having ceased development of its Atari ST - back to consoles. However, the multi-chip architecture, hardware bugs, and poor tools made writing games for the Jaguar difficult. Underwhelming sales further eroded the console's third-party support. Atari attempted to extend the lifespan of the system with the Atari Jaguar CD add-on, with an additional 13 games, and emphasizing the Jaguar's price of over less than its competitors. With the release of the Sega Saturn and PlayStation in 1995, sales of the Jaguar continued to fall. It sold no more than 150,000 units before it was discontinued in 1996. The commercial failure of the Jaguar prompted Atari to leave the console market. After Hasbro Interactive acquired all Atari Corporation properties, the patents of the Jaguar were released into the public domain, with the console declared an open platform. Since its discontinuation, hobbyists have produced games for the system. History Development Atari Corporation's previous home video game console, the Atari 7800, was released in 1986. While it sold 3.77 million units in the U.S. in the period to 1990, it was considered an 'also-ran' and far behind rival Nintendo. Around 1989 work began on a new console leveraging technology from their Atari ST computers. Originally named the Super XE - following the Atari XE Game System - it eventually became the Atari Panther using either 16 or 32-bit architecture. A more advanced system codenamed Jaguar also began work. Both the Jaguar and Panther were developed by the members of Flare Technology, a company formed by Martin Brennan and John Mathieson. The team had claimed that they could not only make a console superior to the Genesis or the Super NES, but they could also be cost-effective. Impressed by their work on the Konix Multisystem, Atari persuaded them to close Flare and form a new company called Flare II, with Atari providing the funding. Work on the Jaguar design progressed faster than expected, so Atari canceled the Panther project in 1991 to focus on the more promising Jaguar, and rumors were already circulating of a 1992 launch and its 32-bit or even 64-bit architecture. By this time the Atari ST had long been surpassed in popularity by the Amiga, while both Atari and Commodore became victims of 'Wintel', which would become the dominant computer platform. Support for Atari's legacy 8-bit products were dropped to fully focus on developing the Jaguar console, while their line of ST computers were dropped during the Jaguar's release in 1993. The Atari Jaguar was unveiled in at the Summer Consumer Electronics Show in June 1993, calling it a "multi-media entertainment system". Launch The Jaguar was launched on November 23, 1993, at a price of $249.99, under a $500 million manufacturing deal with IBM. The system was initially available only in the test markets of New York City and San Francisco, with the slogan "Get bit by Jaguar", claiming superiority over competing 16-bit and 32-bit systems. During this test launch Atari sold all units hoping it would rally support for the system. A nationwide release followed six months later, in early 1994. The Jaguar struggled to attain a substantial user base. Atari reported that it had shipped 17,000 units as part of the system's initial test market in 1993. By the end of 1994, it reported that it had sold approximately 100,000 units. Computer Gaming World wrote in January 1994 that the Jaguar was "a great machine in search of a developer/customer base", as Atari had to "overcome the stigma of its name (lack of marketing and customer support, as well as poor developer relations in the past)". Atari had "ventured late into third party software support" for the Jaguar while competing console 3DO's "18 month public relations blitz" would result in "an avalanche of software support", the magazine reported. The small size and poor quality of the Jaguar's game library became the most commonly cited reason for the Jaguar's tepid adoption, as early releases like Trevor McFur in the Crescent Galaxy, Raiden, and Evolution: Dino Dudes also received poor reviews, the latter two for failing to take full advantage of the Jaguar's hardware. Jaguar did eventually earn praise with games such as Tempest 2000, Doom, and Wolfenstein 3D. The most successful title during the Jaguar's first year was Alien vs. Predator. However, these occasional successes were seen as insufficient while the Jaguar's competitors were receiving a continual stream of critically acclaimed software; GamePro concluded their rave review of Alien vs. Predator by remarking "If Atari can turn out a dozen more games like AvP, Jaguar owners could truly rest easy and enjoy their purchase." Next Generation commented that "thus far, Atari has spectacularly failed to deliver on the software side, leaving many to question the actual quality and capability of the hardware. With only one or two exceptions – Tempest 2000 is cited most frequently – there have just been no truly great games for the Jaguar up to now." They further noted that while Atari is well known by older gamers, the company had much less overall brand recognition than Sega, Sony, Nintendo, or even The 3DO Company. However, they argued that with its low price point, the Jaguar might still compete if Atari could improve the software situation. Bit count controversy Atari tried to downplay competing consoles by proclaiming the Jaguar was the only "64-bit" system; in its marketing in the American market the company used the tagline do the math!, in reference to the 64 number. This claim is questioned by some, because the Motorola 68000 CPU and the Tom and Jerry coprocessors execute 32-bit instruction sets. Atari's reasoning that the 32-bit Tom and Jerry chips work in tandem to add up to a 64-bit system was ridiculed in a mini-editorial by Electronic Gaming Monthly, which commented that "If Sega did the math for the Sega Saturn the way Atari did the math for their 64-bit Jaguar system, the Sega Saturn would be a 112-bit monster of a machine." Next Generation, while giving a mostly negative review of the Jaguar, maintained that it is a true 64-bit system, since the data path from the DRAM to the CPU and Tom and Jerry chips is 64 bits wide. Arrival of Saturn and PlayStation In early 1995, Atari announced that they had dropped the price of the Jaguar to $149.99, in order to be more competitive. Atari ran infomercials with enthusiastic salesmen touting the game system. These aired for most of 1995, but did not sell the remaining stock of Jaguar systems. In a 1995 interview with Next Generation, then-CEO Sam Tramiel declared that the Jaguar was as powerful, if not more powerful, than the newly launched Sega Saturn, and slightly weaker than the upcoming PlayStation. Next Generation received a deluge of letters in response to Tramiel's comments, particularly his threat to bring Sony to court for price dumping if the PlayStation entered the U.S. market at a retail price below $300. Many readers found this threat hollow and hypocritical, since Tramiel noted in the same interview that Atari was selling the Jaguar at a loss. The editor responded that price dumping does not have to do with a product being priced below cost, but its being priced much lower in one country than anotherwhich, as Tramiel said, is illegal. Tramiel and Next Generation agreed that the PlayStation's Japanese price converts to approximately $500. His remark, that the small number of third party Jaguar games was good for Atari's profitability, angered Jaguar owners who were already frustrated at how few games were coming out for the system. In Atari's 1995 annual report, it noted: In addition, Atari had severely limited financial resources, and so could not create the level of marketing which has historically backed successful gaming consoles. Decline By November 1995, mass layoffs and insider statements were fueling journalistic speculation that Atari had ceased both development and manufacturing for the Jaguar and was simply trying to sell off existing stock before exiting the video game industry. Although Atari continued to deny these theories going into 1996, core Jaguar developers such as High Voltage Software and Beyond Games stated that they were no longer receiving communications from Atari regarding future Jaguar projects. In its 10-K405 SEC Filing, filed April 12, 1996, Atari informed stockholders that its revenues had declined by more than half, from $38.7 million in 1994 to $14.6 million in 1995, then gave them the news on the truly dire nature of the Jaguar: The filing confirmed that Atari had abandoned the Jaguar in November 1995 and in the subsequent months were concerned chiefly with liquidating its inventory of Jaguar products. On April 8, 1996, Atari Corporation agreed to merge with JTS, Inc. in a reverse takeover, thus forming JTS Corporation. The merger was finalized on July 30. After the merger, the bulk of Jaguar inventory remained unsold and would be finally moved out to Tiger Software, a private liquidator, on December 23, 1996. On March 13, 1998, JTS sold the Atari name and all of the Atari properties to Hasbro Interactive. Technical specifications From the Jaguar Software Reference manual, page 1: Design specs for the console allude to the GPU or DSP being capable of acting as a CPU, leaving the Motorola 68000 to read controller inputs. Atari's Leonard Tramiel also specifically suggested that the 68000 not be used by developers. In practice, however, many developers use the Motorola 68000 to drive gameplay logic due to the greater developer familiarity of the 68000 and the adequacy of the 68000 for certain types of games. Most critically, a flaw in the memory controller means that certain obscure conventions must be followed for the RISC chips to be able to execute code from RAM. The system was notoriously difficult to program for, not only because of its two-processor design but development tools were released in an unfinished state and the hardware had crippling bugs. Processors Tom chip, 26.59 MHz Graphics processing unit (GPU) – 32-bit RISC architecture, 4 KB internal RAM, all graphical effects are software-based, with additional instructions intended for 3D operations Object Processor – 64-bit fixed-function video processor, converts display lists to video output at scan time. Blitter – 64-bit high speed logic operations, z-buffering and Gouraud shading, with 64-bit internal registers. DRAM controller, 8-, 16-, 32- and 64-bit memory management Jerry chip, 26.59 MHz Digital Signal Processor – 32-bit RISC architecture, 8 KB internal RAM Similar RISC core as the GPU, additional instructions intended for audio operations CD-quality sound (16-bit stereo) Number of sound channels limited by software Two DACs (stereo) convert digital data to analog sound signals Full stereo capabilities Wavetable synthesis and AM synthesis A clock control block, incorporating timers, and a UART Joystick control Motorola 68000 - system processor "used as a manager". General purpose 16-/32-bit control processor, 13.295 MHz Other features RAM: 2 MB on a 64-bit bus using 4 16-bit fast-page-mode DRAMs (80 ns) Storage: ROM cartridges – up to 6 MB DSP-port (JagLink) Monitor-port (composite/S-Video/RGB) Antenna-port (UHF/VHF) - fixed at 591 MHz in Europe; not present on French model Support for ComLynx I/O NTSC/PAL machines can be identified by their power LED colour, Red: NTSC; Green: PAL. COJAG arcade games Atari Games licensed the Atari Jaguar's chipset for use in its arcade games. The system, named COJAG (for "Coin-Op Jaguar"), replaced the 68000 with a 68020 or MIPS R3000-based CPU (depending on the board version), added more RAM, a full 64-bit wide ROM bus (Jaguar ROM bus being 32-bit), and optionally a hard drive (some games such as Freeze are ROM only). It runs the lightgun games Area 51 and Maximum Force, which were released by Atari as dedicated cabinets or as the Area 51 and Maximum Force combo machine. Other games were developed but never released: 3 On 3 Basketball, Fishin' Frenzy, Freeze, and Vicious Circle. Peripherals Prior to the launch of the console in November 1993, Atari had announced a variety of peripherals to be released over the console's lifespan. This included a CD-ROM-based console, a dial-up Internet link with support for online gaming, a virtual reality headset, and an MPEG-2 video card. However, due to the poor sales and eventual commercial failure of the Jaguar, most of the peripherals in development were canceled. The only peripherals and add-ons released by Atari for the Jaguar are a redesigned controller, an adapter for four players, a CD console add-on, and a link cable for local area network (LAN) gaming. The redesigned second controller, the ProController by Atari, added three more face buttons and two triggers. It was created in response to the criticism of the original controller, said to lack enough buttons for fighting games in particular. Sold independently, however, it was never bundled with the system. The Team Tap multitap adds 4-controller support, compatible only with the optionally bundled White Men Can't Jump and NBA Jam Tournament Edition. Eight player gameplay with two Team Taps is possible but unsupported by those games. For LAN multiplayer support, the Jaglink Interface links two Jaguar consoles through a modular extension and a UTP phone cable. It is compatible with three games: AirCars, BattleSphere, and Doom. In 1994 at the CES, Atari announced that it had partnered with Phylon, Inc. to create the Jaguar Voice/Data Communicator. The unit was delayed and an estimated 100 units were produced, but eventually in 1995 was canceled. The Jaguar Voice Modem or JVM utilizes a 19.9 kbit/s dial up modem to answer incoming phone calls and store up to 18 phone numbers. Players directly dial each other for online play, only compatible with Ultra Vortek which initializes the modem by entering 911 on the key pad at startup. Jaguar CD The Jaguar CD is a CD-ROM peripheral for games. It was released in September 1995, two years after the Jaguar's launch. Thirteen CD games were released during its manufacturing lifetime, with more being made later by homebrew developers. Each Jaguar CD unit has a Virtual Light Machine, which displays light patterns corresponding to music, if the user inserts an audio CD into the console. It was developed by Jeff Minter, after experimenting with graphics during the development of Tempest 2000. The program was deemed a spiritual successor to the Atari Video Music, a visualizer released in 1976. The Memory Track is a cartridge accessory for the Jaguar CD, providing Jaguar CD games with 128 K EEPROM for persistent storage of data such as preferences and saved games. The Atari Jaguar Duo (codenamed Jaguar III) was a proposal to integrate the Jaguar CD to make a new console, a concept similar to the TurboDuo and Genesis CDX. A prototype, described by journalists as resembling a bathroom scale, was unveiled at the 1995 Winter Consumer Electronics Show, but the console was canceled before production. Jaguar VR A virtual reality headset compatible with the console, tentatively titled the Jaguar VR, was unveiled by Atari at the 1995 Winter Consumer Electronics Show. The development of the peripheral was a response to Nintendo's virtual reality console, the Virtual Boy, which had been announced the previous year. The headset was developed in cooperation with Virtuality, which had previously created many virtual reality arcade systems, and was already developing a similar headset for practical purposes, named Project Elysium, for IBM. The peripheral was targeted for a commercial release before Christmas 1995. However, the deal with Virtuality was abandoned in October 1995. After Atari's merger with JTS in 1996, all prototypes of the headset were allegedly destroyed. However, two working units, one low-resolution prototype with red and grey-colored graphics and one high-resolution prototype with blue and grey-colored graphics, have since been recovered, and are regularly showcased at retrogaming-themed conventions and festivals. Only one game was developed for the Jaguar VR prototype: a 3D-rendered version of the 1980 arcade game Missile Command, titled Missile Command 3D, and a demo of Virtuality's Zone Hunter was created. Unlicensed peripherals An unofficial expansion peripheral for the Atari Jaguar dubbed the "Catbox" was released by the Rockford, Illinois company ICD. It was originally slated to be released early in the Jaguar's life, in the second quarter of 1994, but was not actually released until mid-1995. The ICD CatBox plugs directly into the AV/DSP connectors located in the rear of the Jaguar console and provides three main functions. These are audio, video, and communications. It features six output formats, three for audio (line level stereo, RGB monitor, headphone jack with volume control) and three for video (composite, S-Video, and RGB analog component video) making the Jaguar compatible with multiple high quality monitor systems and multiple monitors at the same time. It is capable of communications methods known as CatNet and RS-232 as well as DSP pass through, allowing the user to connect two or more Jaguars together for multiplayer games either directly or with modems. The ICD CatBox features a polished stainless steel casing and red LEDs in the jaguar's eyes on the logo that indicate communications activity. An IBM AT-type null modem cable may be used to connect two Jaguars together. The CatBox is also compatible with Atari's Jaglink Interface peripheral. An adaptor for the Jaguar that allows for WebTV access was revealed in 1998; one prototype is known to exist. Game library Reception Reviewing the Jaguar just a few weeks prior to its launch, GamePro gave it a "thumbs sideways". They praised the power of the hardware but criticized the controller, and were dubious of how the software lineup would turn out, commenting that Atari's failure to secure support from key third party publishers such as Capcom was a bad sign. They concluded that "Like the 3DO, the Jaguar is a risky investment – just not quite as expensive." The Jaguar won GameFan'''s "Best New System" award for 1993. The small size and poor quality of the Jaguar's game library became the most commonly cited reason for its failure in the marketplace. The pack-in game Cybermorph was one of the first polygon-based games for consoles, but was criticized for design flaws and a weak color palette, and compared unfavorably with the SNES's Star Fox. Other early releases like Trevor McFur in the Crescent Galaxy, Raiden, and Evolution: Dino Dudes also received poor reviews, the latter two for failing to take full advantage of the Jaguar's hardware. Jaguar did eventually earn praise with games such as Tempest 2000, Doom, and Wolfenstein 3D. The most successful title during the Jaguar's first year was Alien vs. Predator. However, these occasional successes were seen as insufficient while the Jaguar's competitors were receiving a continual stream of critically acclaimed software; GamePro concluded their rave review of Alien vs. Predator by remarking "If Atari can turn out a dozen more games like AvP, Jaguar owners could truly rest easy and enjoy their purchase." In late 1995 reviews of the Jaguar, Game Players remarked, "The Jaguar suffers from several problems, most importantly the lack of good software." and Next Generation likewise commented that "thus far, Atari has spectacularly failed to deliver on the software side, leaving many to question the actual quality and capability of the hardware. With only one or two exceptions – Tempest 2000 is cited most frequently – there have just been no truly great games for the Jaguar up to now." They further noted that while Atari is well known by older gamers, the company had much less overall brand recognition than Sega, Sony, Nintendo, or even The 3DO Company. However, they argued that with its low price point, the Jaguar might still compete if Atari could improve the software situation. They gave the system two out of five stars. Game Players also stated the despite being 64-bit, the Jaguar is much less powerful than the 3DO, Saturn, and PlayStation, even when supplemented with the Jaguar CD. With such a small library of games to challenge the incumbent 16-bit game consoles, Jaguar's appeal never grew beyond a small gaming audience. Digital Spy commented: "Like many failed hardware ventures, it still maintains something of a cult following but can only be considered a misstep for Atari." In 2006 IGN editor Craig Harris rated the original Jaguar controller as the worst game controller ever, criticizing the unwarranted recycling of the 1980s "phone keypad" format and the small number of action buttons, which he found particularly unwise given that Atari was actively trying to court fighting game fans to the system. Ed Semrad of Electronic Gaming Monthly commented that many Jaguar games gratuitously used all of the controller's phone keypad buttons, making the controls much more difficult than they needed to be. GamePros The Watch Dog remarked, "The controller usually doesn't use the keypad, and for games that use the keypad extensively (Alien vs. Predator, Doom), a keypad overlay is used to minimize confusion. But yes, it is a lot of buttons for nuttin'." Atari added more action buttons for its Pro Controller, to improve performance in fighting games in particular. Legacy Telegames continued to publish games for the Jaguar after it was discontinued, and for a time was the only company to do so. On May 14, 1999, Hasbro Interactive announced that it had released all patents to the Jaguar, declaring it an open platform; this opened the doors for extensive homebrew development. Following the announcement, Songbird Productions joined Telegames in releasing unfinished Jaguar games alongside new games to satisfy the cult following. Hasbro Interactive, along with all the Atari properties, was sold to Infogrames on January 29, 2001. In the United Kingdom in 2001, Telegames and retailer Game made a deal to bring the Jaguar to Game's retail outlets. It was initially sold for £29.99 new and software ranged between £9.99 for more common games such as Doom and Ruiner Pinball and £39.99 for rarer releases such as Defender 2000 and Checkered Flag. The machine had a presence in the stores until 2007, when remaining consoles were sold off for £9.99 and games were sold for as low as 97p. Molds In 1997, Imagin Systems, a manufacturer of dental imaging equipment, purchased the Jaguar cartridge and console molds, including the molds for the CD add-on, from JTS. The console molds could, with minor modification, fit their HotRod camera, and the cartridge molds were reused to create an optional memory expansion card. In a retrospective, Imagin founder Steve Mortenson praised the design, but admitted that their device came at the time of the dental industry's transition to USB, and apart from a few prototypes, the molds went unused. In December 2014, the molds were purchased from Imagin Systems by Mike Kennedy, owner of the Kickstarter funded Retro Videogame Magazine'', to propose a new crowdfunded video game console, the Retro VGS, later rebranded the Coleco Chameleon after entering a licensing agreement with Coleco. The purchase of the molds was far cheaper than designing and manufacturing entirely new molds, and Kennedy described their acquisition as "the entire reason [the Retro VGS] is possible". However, the project was terminated in March 2016 following criticism of Kennedy and doubts regarding demand for the proposed console. Two "prototypes" were discovered to be fakes and Coleco withdrew from the project. After the project's termination, the molds were sold to Albert Yarusso, the founder of the AtariAge website. See also Contiki, portable operating system, including a port for the Jaguar with GUI, TCP/IP, and web browser support. References External links Atari Jaguar review, 1994 Products introduced in 1993 Products and services discontinued in 1996 Jaguar duo Home video game consoles Fifth-generation video game consoles 1990s toys 68k-based game consoles Discontinued video game consoles Regionless game consoles
2783
https://en.wikipedia.org/wiki/Atari%20Lynx
Atari Lynx
The Atari Lynx is a hybrid 8/16-bit fourth-generation hand-held game console released by Atari Corporation in September 1989 in North America and 1990 in Europe and Japan. It was the first handheld game console with a color liquid-crystal display. Powered by a 16 MHz 65C02 8-bit CPU and a custom 16-bit blitter, the Lynx was more advanced than Nintendo's monochrome Game Boy, released two months earlier. It also competed with Sega's Game Gear and NEC's TurboExpress, released the following year. The system was developed at Epyx by two former designers of the Amiga personal computers. The project was called the Handy Game or simply Handy. In 1991, Atari replaced the Lynx with a smaller model internally referred to as the Lynx II. Atari published a total of 73 games for the Lynx before it was discontinued in 1995. History The Lynx system was originally developed by Epyx as the Handy Game. In 1986, two former Amiga designers, R. J. Mical and Dave Needle, had been asked by a former manager at Amiga, David Morse, to design a portable gaming system. Morse now worked at Epyx, a game software company with a recent string of hit games. Morse's son had asked him if he could make a portable gaming system, prompting a meeting with Mical and Needle to discuss the idea. Morse convinced Mical and Needle and they were hired by Epyx to be a part of the design team. Planning and design of the console began in 1986 and was completed in 1987. Epyx first showed the Handy system at the Winter Consumer Electronics Show (CES) in January 1989. Facing financial difficulties, Epyx sought partners. Nintendo, Sega, and other companies declined, but Atari and Epyx eventually agreed that Atari would handle production and marketing, and Epyx would handle software development. Epyx declared bankruptcy by the end of the year, so Atari essentially owned the entire project. Both Atari and others had to purchase Amigas from Atari arch-rival Commodore in order to develop Lynx software. The Handy was designed to run games from the cartridge format, and the game data must be copied from ROM to RAM before it can be used. Thus, less RAM is then available and each game's initial loading is slow. There are trace remnants of a cassette tape interface physically capable of being programmed to read a tape. Lynx developers have noted that "there is still reference of the tape and some hardware addresses" and an updated vintage Epyx manual describes the bare existence of what could be utilized for tape support. A 2009 retrospective interview with Mical clarifies that there is no truth to some early reports claiming that games were loaded from tape, and elaborates, "We did think about hard disk a little." The networking system was originally developed to run over infrared links and codenamed RedEye. This was changed to a cable-based networking system before the final release as the infrared beam was too easily interrupted when players walked through the beam, according to Peter Engelbrite. Engelbrite developed the first recordable eight-player co-op game, and the only eight-player game for the Lynx, Todd's Adventures in Slime World. Atari changed the internal speaker and removed the thumb stick on the control pad. At Summer 1989 CES, Atari's press demonstration included the "Portable Color Entertainment System", which was changed to "Lynx" when distributed to resellers, initially retailing in the US at . Its launch was successful. Atari reported that it had sold 90% of the 50,000 units shipped in the launch month in the U.S. with a limited launch in New York. US sales in 1990 were approximately 500,000 units according to the Associated Press. In late 1991, it was reported that Atari sales estimates were about 800,000, which Atari claimed was within its expected projections. Lifetime sales by 1995 amount to fewer than 7 million units when combined with the Game Gear. In comparison, 16 million Game Boy units were sold by 1995 because of its ruggedness, half price, much longer battery life, bundling with the smash hit Tetris, and superior game library. As with the console units, the game cartridge design evolved over the first year of the console's release. The first generation of cartridges are flat, and designed to be stackable for ease of storage. However, this design proved to be very difficult to remove from the console and was replaced by a second design. This style, called "tabbed" or "ridged", adds two small tabs on the underside to aid in removal. The original flat style cartridges can be stacked on top of the newer cartridges, but the newer cartridges can not be easily stacked on each other, nor were they stored easily. Thus a third style, the "curved lip" style was produced, and all official and third-party cartridges during the console's lifespan were released (or re-released) using this style. In May 1991, Sega launched its Game Gear portable gaming handheld with a color screen. In comparison to the Lynx it had shorter battery life (3–4 hours as opposed to 4-5 for the Lynx), but it is slightly smaller, has significantly more games, and cost $30 less than the Lynx at launch. Retailers such as Game and Toys "R" Us continued to sell the Lynx well into the mid-1990s on the back of the Atari Jaguar launch, helped by magazines such as Ultimate Future Games which continued to cover the Lynx alongside the new generation of 32-bit and 64-bit consoles. Lynx II In July 1991, Atari introduced a new version of the Lynx, internally called the "Lynx II", with a new marketing campaign, new packaging, slightly improved hardware, better battery life, and a sleeker look. It has rubber hand grips and a clearer backlit color screen with a power save option (which turns off the backlighting). The monaural headphone jack of the original Lynx was replaced with one wired for stereo. The Lynx II was available without any accessories, dropping the price to . Decline In 1993, Atari started shifting its focus away from the Lynx in order to prepare for the launch of the Jaguar; a few games were released during that time, including Battlezone 2000. Support for the Lynx was formally discontinued in 1995. After the respective launches of the Sega Saturn and Sony PlayStation caused the commercial failure of the Jaguar, Atari ceased all game development and hardware manufacturing by early 1996 and would later merge with JTS, Inc. on July 30 of that year. Features The Atari Lynx has a backlit color LCD display, switchable right- and left-handed (upside down) configuration, and the ability to network with other units via Comlynx cable. The maximum stable connection allowed is eight players. Each Lynx needs a copy of the game, and one cable can connect two machines. The cables can be connected into a chain. The Lynx was cited as the "first gaming console with hardware support for zooming and distortion of sprites". With a 4096 color palette and integrated math and graphics co-processors (including a sprite engine unit), its color graphics display was said to be the key defining feature in the system's competition against Nintendo's monochromatic Game Boy. The fast pseudo-3D graphics features were made possible on a minimal hardware system by co-designer Dave Needle having "invented the technique for planar expansion/shrinking capability" and using stretched, textured, triangles instead of full polygons. Technical specifications Mikey (8-bit VLSI custom CMOS chip running at 16 MHz) VLSI 8-bit VL65NC02 processor (based on the MOS 6502) running at up to 4 MHz (3.6 MHz average) Sound engine 4 channel sound 8-bit DAC for each channel (4 channels × 8-bits/channel = 32 bits commonly quoted) these four sound channels can also switch in analogue sound mode to generate PSG sound. Video DMA driver for liquid-crystal display Custom built and designed by Jay Miner and David Morse 160×102 pixels resolution 4,096 color (12-bit) palette 16 simultaneous colors (4 bits) from palette per scanline Variable frame rate (up to 75 frames/second) Eight system timers (two reserved for LCD timing, one for UART) Interrupt controller UART (for Comlynx) (fixed format 8E1, up to 62500 Bd / TurboMode 1,000,000Bd) 512 bytes of bootstrap and game-card loading ROM Suzy (16-bit VLSI custom CMOS chip running at ) Unlimited number of blitter "sprites" with collision detection Hardware sprite scaling, distortion, and tilting effects Hardware decoding of compressed sprite data Hardware clipping and multi-directional scrolling Math engine Hardware 16-bit × 16-bit → 32-bit multiply with optional accumulation; 32-bit ÷ 16-bit → 16-bit divide Parallel processing of CPU RAM: 64 KB 120ns DRAM Cartridges: 128, 256, 512 KB and (with bank-switching) 1 MB Ports: Headphone port ( stereo; wired for mono on the original Lynx) ComLynx (multiple unit communications, serial) LCD Screen: 3.5" diagonal Battery holder (six AA) 4–5 hours (Lynx I) 5–6 hours (Lynx II) Reception Lynx was reviewed in 1990 in Dragon, which gave it 5 out of 5 stars. The review states that the Lynx "throws the Game Boy into the prehistoric age", and praises the built-in object scaling capabilities, the multiplayer feature of the ComLynx cable, and the strong set of launch games. Legacy Telegames released several games in the late 1990s, including a port of Raiden and a platformer called Fat Bobby in 1997, and an action sports game called Hyperdrome in 1999. On March 13, 1998, nearly three years after the Lynx's discontinuation, JTS Corporation sold all of the Atari assets to Hasbro Interactive for $5 million. On May 14, 1999, Hasbro, which held on to those properties until selling Hasbro Interactive to Infogrames in 2001, released into the public domain all rights to the Jaguar, opening up the platform for anyone to publish software on without Hasbro's interference. Internet theories say that the Lynx's rights may have been released to the public at the same time as the Jaguar, but this is clearly disputed. Nevertheless, since discontinuation, the Lynx, like the Jaguar, has continued to receive support from a grassroots community which would go on to produce many successful homebrew games such as T-Tris (the first Lynx game with a save-game feature), Alpine Games, and Zaku. In 2008, Atari was honored at the 59th Annual Technology & Engineering Emmy Awards for pioneering the development of handheld games with the Lynx. See also List of Atari Lynx games History of Atari References External links AtariAge – Comprehensive Lynx Database and information Guide to Atari Lynx games at Retro Video Gamer Too Powerful for Its Own Good, Atari's Lynx Remains a Favorite 25 Years Later Atari Lynx review, 1990 Atari Lynx Hardware Documentation Atari Lynx Development Wiki Computer-related introductions in 1989 Discontinued handheld game consoles Handheld game consoles Fourth-generation video game consoles Lynx 1980s toys 1990s toys 65xx-based video game consoles Public domain in the United States Regionless game consoles
2785
https://en.wikipedia.org/wiki/Annals%20of%20Mathematics
Annals of Mathematics
The Annals of Mathematics is a mathematical journal published every two months by Princeton University and the Institute for Advanced Study. History The journal was established as The Analyst in 1874 and with Joel E. Hendricks as the founding editor-in-chief. It was "intended to afford a medium for the presentation and analysis of any and all questions of interest or importance in pure and applied Mathematics, embracing especially all new and interesting discoveries in theoretical and practical astronomy, mechanical philosophy, and engineering". It was published in Des Moines, Iowa, and was the earliest American mathematics journal to be published continuously for more than a year or two. This incarnation of the journal ceased publication after its tenth year, in 1883, giving as an explanation Hendricks' declining health, but Hendricks made arrangements to have it taken over by new management, and it was continued from March 1884 as the Annals of Mathematics. The new incarnation of the journal was edited by Ormond Stone (University of Virginia). It moved to Harvard in 1899 before reaching its current home in Princeton in 1911. An important period for the journal was 1928–1958 with Solomon Lefschetz as editor. During this time, it became an increasingly well-known and respected journal. Its rise, in turn, stimulated American mathematics. Norman Steenrod characterized Lefschetz' impact as editor as follows: "The importance to American mathematicians of a first-class journal is that it sets high standards for them to aim at. In this somewhat indirect manner, Lefschetz profoundly affected the development of mathematics in the United States." Princeton University continued to publish the Annals on its own until 1933, when the Institute for Advanced Study took joint editorial control. Since 1998 it has been available in an electronic edition, alongside its regular print edition. The electronic edition was available without charge, as an open access journal, but since 2008 this is no longer the case. Issues from before 2003 were transferred to the non-free JSTOR archive, and articles are not freely available until 5 years after publication. Editors The current () editors of the Annals of Mathematics are Helmut Hofer, Nick Katz, Sergiu Klainerman, Fernando Codá Marques, Assaf Naor, Peter Sarnak and Zoltán Szabó (all but Helmut Hofer from Princeton University, with Hofer being a professor at the Institute for Advanced Study and Peter Sarnak also being a professor there as a second affiliation). Abstracting and indexing The journal is abstracted and indexed in the Science Citation Index, Current Contents/Physical, Chemical & Earth Sciences, and Scopus. According to the Journal Citation Reports, the journal has a 2020 impact factor of 5.246, ranking it third out of 330 journals in the category "Mathematics". References External links Mathematics journals Publications established in 1874 English-language journals Bimonthly journals Princeton University publications Academic journals published by universities and colleges of the United States 1874 establishments in Iowa
2787
https://en.wikipedia.org/wiki/Astrobiology
Astrobiology
Astrobiology is a scientific field within the life and environmental sciences that studies the origins, early evolution, distribution, and future of life in the universe by investigating its deterministic conditions and contingent events. As a discipline, astrobiology is founded on the premise that life may exist beyond Earth. Research in astrobiology comprises three main areas: the study of habitable environments in the Solar System and beyond, the search for planetary biosignatures of past or present extraterrestrial life, and the study of the origin and early evolution of life on Earth. The field of astrobiology has its origins in the 20th century with the advent of space exploration and the discovery of exoplanets. Early astrobiology research focused on the search for extraterrestrial life and the study of the potential for life to exist on other planets. In the 1960s and 1970s, NASA began its astrobiology pursuits within the Viking program, which was the first US mission to land on Mars and search for signs of life. This mission, along with other early space exploration missions, laid the foundation for the development of astrobiology as a discipline. Regarding habitable environments, astrobiology investigates potential locations beyond Earth that could support life, such as Mars, Europa, and exoplanets, through research into the extremophiles populating austere environments on Earth, like volcanic and deep sea environments. Research within this topic is conducted utilising the methodology of the geosciences, especially geobiology, for astrobiological applications. The search for biosignatures involves the identification of signs of past or present life in the form of organic compounds, isotopic ratios, or microbial fossils. Research within this topic is conducted utilising the methodology of planetary and environmental science, especially atmospheric science, for astrobiological applications, and is often conducted through remote sensing and in situ missions. Astrobiology also concerns the study of the origin and early evolution of life on Earth to try to understand the conditions that are necessary for life to form on other planets. This research seeks to understand how life emerged from non-living matter and how it evolved to become the diverse array of organisms we see today. Research within this topic is conducted utilising the methodology of paleosciences, especially paleobiology, for astrobiological applications. Astrobiology is a rapidly developing field with a strong interdisciplinary aspect that holds many challenges and opportunities for scientists. Astrobiology programs and research centres are present in many universities and research institutions around the world, and space agencies like NASA and ESA have dedicated departments and programs for astrobiology research. Overview The term astrobiology was first proposed by the Russian astronomer Gavriil Tikhov in 1953. It is etymologically derived from the Greek , "star"; , "life"; and , -logia, "study". A close synonym is exobiology from the Greek Έξω, "external"; , "life"; and , -logia, "study", coined by American molecular biologist Joshua Lederberg; exobiology is considered to have a narrow scope limited to search of life external to Earth. Another associated term is xenobiology, from the Greek ξένος, "foreign"; , "life"; and -λογία, "study", coined by American science fiction writer Robert Heinlein in his work The Star Beast; xenobiology is now used in a more specialised sense, referring to 'biology based on foreign chemistry', whether of extraterrestrial or terrestrial (typically synthetic) origin. While the potential for extraterrestrial life, especially intelligent life, has been explored throughout human history within philosophy and narrative, the question is a verifiable hypothesis and thus a valid line of scientific inquiry; planetary scientist David Grinspoon calls it a field of natural philosophy, grounding speculation on the unknown in known scientific theory. The modern field of astrobiology can be traced back to the 1950s and 1960s with the advent of space exploration, when scientists began to seriously consider the possibility of life on other planets. In 1957, the Soviet Union launched Sputnik 1, the first artificial satellite, which marked the beginning of the Space Age. This event led to an increase in the study of the potential for life on other planets, as scientists began to consider the possibilities opened up by the new technology of space exploration. In 1959, NASA funded its first exobiology project, and in 1960, NASA founded the Exobiology Program, now one of four main elements of NASA's current Astrobiology Program. In 1971, NASA funded Project Cyclops, part of the search for extraterrestrial intelligence, to search radio frequencies of the electromagnetic spectrum for interstellar communications transmitted by extraterrestrial life outside the Solar System. In the 1960s-1970s, NASA established the Viking program, which was the first US mission to land on Mars and search for metabolic signs of present life; the results were inconclusive. In the 1980s and 1990s, the field began to expand and diversify as new discoveries and technologies emerged. The discovery of microbial life in extreme environments on Earth, such as deep-sea hydrothermal vents, helped to clarify the feasibility of potential life existing in harsh conditions. The development of new techniques for the detection of biosignatures, such as the use of stable isotopes, also played a significant role in the evolution of the field. The contemporary landscape of astrobiology emerged in the early 21st century, focused on utilising Earth and environmental science for applications within comparate space environments. Missions included the ESA's Beagle 2, which failed minutes after landing on Mars, NASA's Phoenix lander, which probed the environment for past and present planetary habitability of microbial life on Mars and researched the history of water, and NASA's Curiosity rover, currently probing the environment for past and present planetary habitability of microbial life on Mars. Theoretical foundations Planetary habitability Astrobiological research makes a number of simplifying assumptions when studying the necessary components for planetary habitability. Carbon and Organic Compounds: Carbon is the fourth most abundant element in the universe and the energy required to make or break a bond is at just the appropriate level for building molecules which are not only stable, but also reactive. The fact that carbon atoms bond readily to other carbon atoms allows for the building of extremely long and complex molecules. As such, astrobiological research presumes that the vast majority of life forms in the Milky Way galaxy are based on carbon chemistries, as are all life forms on Earth. However, theoretical astrobiology entertains the potential for other organic molecular bases for life, thus astrobiological research often focuses on identifying environments that have the potential to support life based on the presence of organic compounds. Liquid water: Liquid water is a common molecule that provides an excellent environment for the formation of complicated carbon-based molecules, and is generally considered necessary for life as we know it to exist. Thus, astrobiological research presumes that extraterrestrial life similarly depends upon access to liquid water, and often focuses on identifying environments that have the potential to support liquid water. Some researchers posit environments of water-ammonia mixtures as possible solvents for hypothetical types of biochemistry. Environmental Stability: Where organisms adaptively evolve to the conditions of the environments in which they reside, environmental stability is considered necessary for life to exist. This presupposes the necessity of a stable temperature, pressure, and radiation levels; resultantly, astrobiological research focuses on planets orbiting Sun-like red dwarf stars. This is because very large stars have relatively short lifetimes, meaning that life might not have time to emerge on planets orbiting them; very small stars provide so little heat and warmth that only planets in very close orbits around them would not be frozen solid, and in such close orbits these planets would be tidally locked to the star; whereas the long lifetimes of red dwarfs could allow the development of habitable environments on planets with thick atmospheres. This is significant as red dwarfs are extremely common. (See also: Habitability of red dwarf systems). Energy source: It is assumed that any life elsewhere in the universe would also require an energy source. Previously, it was assumed that this would necessarily be from a sun-like star, however with developments within extremophile research contemporary astrobiological research often focuses on identifying environments that have the potential to support life based on the availability of an energy source, such as the presence of volcanic activity on a planet or moon that could provide a source of heat and energy. It is important to note that these assumptions are based on our current understanding of life on Earth and the conditions under which it can exist. As our understanding of life and the potential for it to exist in different environments evolves, these assumptions may change. Methodology Astrobiological research concerning the study of habitable environments in our solar system and beyond utilises methodologies within the geosciences. Research within this branch primarily concerns the geobiology of organisms that can survive in extreme environments on Earth, such as in volcanic or deep sea environments, to understand the limits of life, and the conditions under which life might be able to survive on other planets. This includes, but is not limited to; Deep-sea extremophiles: Researchers are studying organisms that live in the extreme environments of deep-sea hydrothermal vents and cold seeps. These organisms survive in the absence of sunlight, and some are able to survive in high temperatures and pressures, and use chemical energy instead of sunlight to produce food. Desert extremophiles: Researchers are studying organisms that can survive in extreme dry, high temperature conditions, such as in deserts. Microbes in extreme environments: Researchers are investigating the diversity and activity of microorganisms in environments such as deep mines, subsurface soil, cold glaciers and polar ice, and high-altitude environments. Research also regards the long-term survival of life on Earth, and the possibilities and hazards of life on other planets, including; Biodiversity and ecosystem resilience: Scientists are studying how the diversity of life and the interactions between different species contribute to the resilience of ecosystems and their ability to recover from disturbances. Climate change and extinction: Researchers are investigating the impacts of climate change on different species and ecosystems, and how they may lead to extinction or adaptation. This includes the evolution of Earth's climate and geology, and their potential impact on the habitability of the planet in the future, especially for humans. Human impact on the biosphere: Scientists are studying the ways in which human activities, such as deforestation, pollution, and the introduction of invasive species, are affecting the biosphere and the long-term survival of life on Earth. Long-term preservation of life: Researchers are exploring ways to preserve samples of life on Earth for long periods of time, such as cryopreservation and genomic preservation, in the event of a catastrophic event that could wipe out most of life on Earth. Emerging astrobiological research concerning the search for planetary biosignatures of past or present extraterrestrial life utilise methodologies within planetary sciences. These include; The study of microbial life in the subsurface of Mars: Scientists are using data from Mars rover missions to study the composition of the subsurface of Mars, searching for biosignatures of past or present microbial life. The study of subsurface oceans on icy moons: Recent discoveries of subsurface oceans on moons such as Europa and Enceladus have opened up new habitability zones thus targets for the search for extraterrestrial life. Currently, missions like the Europa Clipper are being planned to search for biosignatures within these environments. The study of the atmospheres of planets: Scientists are studying the potential for life to exist in the atmospheres of planets, with a focus on the study of the physical and chemical conditions necessary for such life to exist, namely the detection of organic molecules and biosignature gases; for example, the study of the possibility of life in the atmospheres of exoplanets that orbit red dwarfs and the study of the potential for microbial life in the upper atmosphere of Venus. Telescopes and remote sensing of exoplanets: The discovery of thousands of exoplanets has opened up new opportunities for the search for biosignatures. Scientists are using telescopes such as the James Webb Space Telescope and the Transiting Exoplanet Survey Satellite to search for biosignatures on exoplanets. They are also developing new techniques for the detection of biosignatures, such as the use of remote sensing to search for biosignatures in the atmosphere of exoplanets. SETI and CETI: Scientists search for signals from intelligent extraterrestrial civilizations using radio and optical telescopes within the discipline of extraterrestrial intelligence communications (CETI). CETI focuses on composing and deciphering messages that could theoretically be understood by another technological civilization. Communication attempts by humans have included broadcasting mathematical languages, pictorial systems such as the Arecibo message, and computational approaches to detecting and deciphering 'natural' language communication. While some high-profile scientists, such as Carl Sagan, have advocated the transmission of messages, theoretical physicist Stephen Hawking warned against it, suggesting that aliens may raid Earth for its resources. Emerging astrobiological research concerning the study of the origin and early evolution of life on Earth utilises methodologies within the palaeosciences. These include; The study of the early atmosphere: Researchers are investigating the role of the early atmosphere in providing the right conditions for the emergence of life, such as the presence of gases that could have helped to stabilise the climate and the formation of organic molecules. The study of the early magnetic field: Researchers are investigating the role of the early magnetic field in protecting the Earth from harmful radiation and helping to stabilise the climate. This research has immense astrobiological implications where the subjects of current astrobiological research like Mars lack such a field. The study of prebiotic chemistry: Scientists are studying the chemical reactions that could have occurred on the early Earth that led to the formation of the building blocks of life- amino acids, nucleotides, and lipids- and how these molecules could have formed spontaneously under early Earth conditions. The study of impact events: Scientists are investigating the potential role of impact events- especially meteorites- in the delivery of water and organic molecules to early Earth. The study of the primordial soup: Researchers are investigating the conditions and ingredients that were present on the early Earth that could have led to the formation of the first living organisms, such as the presence of water and organic molecules, and how these ingredients could have led to the formation of the first living organisms. This includes the role of water in the formation of the first cells and in catalysing chemical reactions. The study of the role of minerals: Scientists are investigating the role of minerals like clay in catalysing the formation of organic molecules, thus playing a role in the emergence of life on Earth. The study of the role of energy and electricity: Scientists are investigating the potential sources of energy and electricity that could have been available on the early Earth, and their role in the formation of organic molecules, thus the emergence of life. The study of the early oceans: Scientists are investigating the composition and chemistry of the early oceans and how it may have played a role in the emergence of life, such as the presence of dissolved minerals that could have helped to catalyse the formation of organic molecules. The study of hydrothermal vents: Scientists are investigating the potential role of hydrothermal vents in the origin of life, as these environments may have provided the energy and chemical building blocks needed for its emergence. The study of plate tectonics: Scientists are investigating the role of plate tectonics in creating a diverse range of environments on the early Earth. The study of the early biosphere: Researchers are investigating the diversity and activity of microorganisms in the early Earth, and how these organisms may have played a role in the emergence of life. The study of microbial fossils: Scientists are investigating the presence of microbial fossils in ancient rocks, which can provide clues about the early evolution of life on Earth and the emergence of the first organisms. Research The systematic search for possible life outside Earth is a valid multidisciplinary scientific endeavor. However, hypotheses and predictions as to its existence and origin vary widely, and at the present, the development of hypotheses firmly grounded on science may be considered astrobiology's most concrete practical application. It has been proposed that viruses are likely to be encountered on other life-bearing planets, and may be present even if there are no biological cells. Research outcomes , no evidence of extraterrestrial life has been identified. Examination of the Allan Hills 84001 meteorite, which was recovered in Antarctica in 1984 and originated from Mars, is thought by David McKay, as well as few other scientists, to contain microfossils of extraterrestrial origin; this interpretation is controversial. Yamato 000593, the second largest meteorite from Mars, was found on Earth in 2000. At a microscopic level, spheres are found in the meteorite that are rich in carbon compared to surrounding areas that lack such spheres. The carbon-rich spheres may have been formed by biotic activity according to some NASA scientists. On 5 March 2011, Richard B. Hoover, a scientist with the Marshall Space Flight Center, speculated on the finding of alleged microfossils similar to cyanobacteria in CI1 carbonaceous meteorites in the fringe Journal of Cosmology, a story widely reported on by mainstream media. However, NASA formally distanced itself from Hoover's claim. According to American astrophysicist Neil deGrasse Tyson: "At the moment, life on Earth is the only known life in the universe, but there are compelling arguments to suggest we are not alone." Elements of astrobiology Astronomy Most astronomy-related astrobiology research falls into the category of extrasolar planet (exoplanet) detection, the hypothesis being that if life arose on Earth, then it could also arise on other planets with similar characteristics. To that end, a number of instruments designed to detect Earth-sized exoplanets have been considered, most notably NASA's Terrestrial Planet Finder (TPF) and ESA's Darwin programs, both of which have been cancelled. NASA launched the Kepler mission in March 2009, and the French Space Agency launched the COROT space mission in 2006. There are also several less ambitious ground-based efforts underway. The goal of these missions is not only to detect Earth-sized planets but also to directly detect light from the planet so that it may be studied spectroscopically. By examining planetary spectra, it would be possible to determine the basic composition of an extrasolar planet's atmosphere and/or surface. Given this knowledge, it may be possible to assess the likelihood of life being found on that planet. A NASA research group, the Virtual Planet Laboratory, is using computer modeling to generate a wide variety of virtual planets to see what they would look like if viewed by TPF or Darwin. It is hoped that once these missions come online, their spectra can be cross-checked with these virtual planetary spectra for features that might indicate the presence of life. An estimate for the number of planets with intelligent communicative extraterrestrial life can be gleaned from the Drake equation, essentially an equation expressing the probability of intelligent life as the product of factors such as the fraction of planets that might be habitable and the fraction of planets on which life might arise: where: N = The number of communicative civilizations R* = The rate of formation of suitable stars (stars such as the Sun) fp = The fraction of those stars with planets (current evidence indicates that planetary systems may be common for stars like the Sun) ne = The number of Earth-sized worlds per planetary system fl = The fraction of those Earth-sized planets where life actually develops fi = The fraction of life sites where intelligence develops fc = The fraction of communicative planets (those on which electromagnetic communications technology develops) L = The "lifetime" of communicating civilizations However, whilst the rationale behind the equation is sound, it is unlikely that the equation will be constrained to reasonable limits of error any time soon. The problem with the formula is that it is not used to generate or support hypotheses because it contains factors that can never be verified. The first term, R*, number of stars, is generally constrained within a few orders of magnitude. The second and third terms, fp, stars with planets and fe, planets with habitable conditions, are being evaluated for the star's neighborhood. Drake originally formulated the equation merely as an agenda for discussion at the Green Bank conference, but some applications of the formula had been taken literally and related to simplistic or pseudoscientific arguments. Another associated topic is the Fermi paradox, which suggests that if intelligent life is common in the universe, then there should be obvious signs of it. Another active research area in astrobiology is planetary system formation. It has been suggested that the peculiarities of the Solar System (for example, the presence of Jupiter as a protective shield) may have greatly increased the probability of intelligent life arising on Earth. Biology Biology cannot state that a process or phenomenon, by being mathematically possible, has to exist forcibly in an extraterrestrial body. Biologists specify what is speculative and what is not. The discovery of extremophiles, organisms able to survive in extreme environments, became a core research element for astrobiologists, as they are important to understand four areas in the limits of life in planetary context: the potential for panspermia, forward contamination due to human exploration ventures, planetary colonization by humans, and the exploration of extinct and extant extraterrestrial life. Until the 1970s, life was thought to be entirely dependent on energy from the Sun. Plants on Earth's surface capture energy from sunlight to photosynthesize sugars from carbon dioxide and water, releasing oxygen in the process that is then consumed by oxygen-respiring organisms, passing their energy up the food chain. Even life in the ocean depths, where sunlight cannot reach, was thought to obtain its nourishment either from consuming organic detritus rained down from the surface waters or from eating animals that did. The world's ability to support life was thought to depend on its access to sunlight. However, in 1977, during an exploratory dive to the Galapagos Rift in the deep-sea exploration submersible Alvin, scientists discovered colonies of giant tube worms, clams, crustaceans, mussels, and other assorted creatures clustered around undersea volcanic features known as black smokers. These creatures thrive despite having no access to sunlight, and it was soon discovered that they comprise an entirely independent ecosystem. Although most of these multicellular lifeforms need dissolved oxygen (produced by oxygenic photosynthesis) for their aerobic cellular respiration and thus are not completely independent from sunlight by themselves, the basis for their food chain is a form of bacterium that derives its energy from oxidization of reactive chemicals, such as hydrogen or hydrogen sulfide, that bubble up from the Earth's interior. Other lifeforms entirely decoupled from the energy from sunlight are green sulfur bacteria which are capturing geothermal light for anoxygenic photosynthesis or bacteria running chemolithoautotrophy based on the radioactive decay of uranium. This chemosynthesis revolutionized the study of biology and astrobiology by revealing that life need not be sunlight-dependent; it only requires water and an energy gradient in order to exist. Biologists have found extremophiles that thrive in ice, boiling water, acid, alkali, the water core of nuclear reactors, salt crystals, toxic waste and in a range of other extreme habitats that were previously thought to be inhospitable for life. This opened up a new avenue in astrobiology by massively expanding the number of possible extraterrestrial habitats. Characterization of these organisms, their environments and their evolutionary pathways, is considered a crucial component to understanding how life might evolve elsewhere in the universe. For example, some organisms able to withstand exposure to the vacuum and radiation of outer space include the lichen fungi Rhizocarpon geographicum and Xanthoria elegans, the bacterium Bacillus safensis, Deinococcus radiodurans, Bacillus subtilis, yeast Saccharomyces cerevisiae, seeds from Arabidopsis thaliana ('mouse-ear cress'), as well as the invertebrate animal Tardigrade. While tardigrades are not considered true extremophiles, they are considered extremotolerant microorganisms that have contributed to the field of astrobiology. Their extreme radiation tolerance and presence of DNA protection proteins may provide answers as to whether life can survive away from the protection of the Earth's atmosphere. Jupiter's moon, Europa, and Saturn's moon, Enceladus, are now considered the most likely locations for extant extraterrestrial life in the Solar System due to their subsurface water oceans where radiogenic and tidal heating enables liquid water to exist. The origin of life, known as abiogenesis, distinct from the evolution of life, is another ongoing field of research. Oparin and Haldane postulated that the conditions on the early Earth were conducive to the formation of organic compounds from inorganic elements and thus to the formation of many of the chemicals common to all forms of life we see today. The study of this process, known as prebiotic chemistry, has made some progress, but it is still unclear whether or not life could have formed in such a manner on Earth. The alternative hypothesis of panspermia is that the first elements of life may have formed on another planet with even more favorable conditions (or even in interstellar space, asteroids, etc.) and then have been carried over to Earth. The cosmic dust permeating the universe contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. Further, a scientist suggested that these compounds may have been related to the development of life on Earth and said that, "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." More than 20% of the carbon in the universe may be associated with polycyclic aromatic hydrocarbons (PAHs), possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets. PAHs are subjected to interstellar medium conditions and are transformed through hydrogenation, oxygenation and hydroxylation, to more complex organics—"a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively". In October 2020, astronomers proposed the idea of detecting life on distant planets by studying the shadows of trees at certain times of the day to find patterns that could be detected through observation of exoplanets. Rare Earth hypothesis The Rare Earth hypothesis postulates that multicellular life forms found on Earth may actually be more of a rarity than scientists assume. According to this hypothesis, life on Earth (and more, multi-cellular life) is possible because of a conjunction of the right circumstances (galaxy and location within it, planetary system, star, orbit, planetary size, atmosphere, etc.); and the chance for all those circumstances to repeat elsewhere may be rare. It provides a possible answer to the Fermi paradox which suggests, "If extraterrestrial aliens are common, why aren't they obvious?" It is apparently in opposition to the principle of mediocrity, assumed by famed astronomers Frank Drake, Carl Sagan, and others. The principle of mediocrity suggests that life on Earth is not exceptional, and it is more than likely to be found on innumerable other worlds. Missions Research into the environmental limits of life and the workings of extreme ecosystems is ongoing, enabling researchers to better predict what planetary environments might be most likely to harbor life. Missions such as the Phoenix lander, Mars Science Laboratory, ExoMars, Mars 2020 rover to Mars, and the Cassini probe to Saturn's moons aim to further explore the possibilities of life on other planets in the Solar System. Viking program The two Viking landers each carried four types of biological experiments to the surface of Mars in the late 1970s. These were the only Mars landers to carry out experiments looking specifically for metabolism by current microbial life on Mars. The landers used a robotic arm to collect soil samples into sealed test containers on the craft. The two landers were identical, so the same tests were carried out at two places on Mars' surface; Viking 1 near the equator and Viking 2 further north. The result was inconclusive, and is still disputed by some scientists. Norman Horowitz was the chief of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976. Horowitz considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon based life. Beagle 2 Beagle 2 was an unsuccessful British Mars lander that formed part of the European Space Agency's 2003 Mars Express mission. Its primary purpose was to search for signs of life on Mars, past or present. Although it landed safely, it was unable to correctly deploy its solar panels and telecom antenna. EXPOSE EXPOSE is a multi-user facility mounted in 2008 outside the International Space Station dedicated to astrobiology. EXPOSE was developed by the European Space Agency (ESA) for long-term spaceflights that allow exposure of organic chemicals and biological samples to outer space in low Earth orbit. Mars Science Laboratory The Mars Science Laboratory (MSL) mission landed the Curiosity rover that is currently in operation on Mars. It was launched 26 November 2011, and landed at Gale Crater on 6 August 2012. Mission objectives are to help assess Mars' habitability and in doing so, determine whether Mars is or has ever been able to support life, collect data for a future human mission, study Martian geology, its climate, and further assess the role that water, an essential ingredient for life as we know it, played in forming minerals on Mars. Tanpopo The Tanpopo mission is an orbital astrobiology experiment investigating the potential interplanetary transfer of life, organic compounds, and possible terrestrial particles in the low Earth orbit. The purpose is to assess the panspermia hypothesis and the possibility of natural interplanetary transport of microbial life as well as prebiotic organic compounds. Early mission results show evidence that some clumps of microorganism can survive for at least one year in space. This may support the idea that clumps greater than 0.5 millimeters of microorganisms could be one way for life to spread from planet to planet. ExoMars rover ExoMars is a robotic mission to Mars to search for possible biosignatures of Martian life, past or present. This astrobiological mission is currently under development by the European Space Agency (ESA) in partnership with the Russian Federal Space Agency (Roscosmos); it is planned for a 2022 launch. Mars 2020 Mars 2020 successfully landed its rover Perseverance in Jezero Crater on 18 February 2021. It will investigate environments on Mars relevant to astrobiology, investigate its surface geological processes and history, including the assessment of its past habitability and potential for preservation of biosignatures and biomolecules within accessible geological materials. The Science Definition Team is proposing the rover collect and package at least 31 samples of rock cores and soil for a later mission to bring back for more definitive analysis in laboratories on Earth. The rover could make measurements and technology demonstrations to help designers of a human expedition understand any hazards posed by Martian dust and demonstrate how to collect carbon dioxide (CO2), which could be a resource for making molecular oxygen (O2) and rocket fuel. Europa Clipper Europa Clipper is a mission planned by NASA for a 2025 launch that will conduct detailed reconnaissance of Jupiter's moon Europa and will investigate whether its internal ocean could harbor conditions suitable for life. It will also aid in the selection of future landing sites. Dragonfly Dragonfly is a NASA mission scheduled to land on Titan in 2036 to assess its microbial habitability and study its prebiotic chemistry. Dragonfly is a rotorcraft lander that will perform controlled flights between multiple locations on the surface, which allows sampling of diverse regions and geological contexts. Proposed concepts Icebreaker Life Icebreaker Life is a lander mission that was proposed for NASA's Discovery Program for the 2021 launch opportunity, but it was not selected for development. It would have had a stationary lander that would be a near copy of the successful 2008 Phoenix and it would have carried an upgraded astrobiology scientific payload, including a 1-meter-long core drill to sample ice-cemented ground in the northern plains to conduct a search for organic molecules and evidence of current or past life on Mars. One of the key goals of the Icebreaker Life mission is to test the hypothesis that the ice-rich ground in the polar regions has significant concentrations of organics due to protection by the ice from oxidants and radiation. Journey to Enceladus and Titan Journey to Enceladus and Titan (JET) is an astrobiology mission concept to assess the habitability potential of Saturn's moons Enceladus and Titan by means of an orbiter. Enceladus Life Finder Enceladus Life Finder (ELF) is a proposed astrobiology mission concept for a space probe intended to assess the habitability of the internal aquatic ocean of Enceladus, Saturn's sixth-largest moon. Life Investigation For Enceladus Life Investigation For Enceladus (LIFE) is a proposed astrobiology sample-return mission concept. The spacecraft would enter into Saturn orbit and enable multiple flybys through Enceladus' icy plumes to collect icy plume particles and volatiles and return them to Earth on a capsule. The spacecraft may sample Enceladus' plumes, the E ring of Saturn, and the upper atmosphere of Titan. Oceanus Oceanus is an orbiter proposed in 2017 for the New Frontiers mission No. 4. It would travel to the moon of Saturn, Titan, to assess its habitability. Oceanus objectives are to reveal Titan's organic chemistry, geology, gravity, topography, collect 3D reconnaissance data, catalog the organics and determine where they may interact with liquid water. Explorer of Enceladus and Titan Explorer of Enceladus and Titan (E2T) is an orbiter mission concept that would investigate the evolution and habitability of the Saturnian satellites Enceladus and Titan. The mission concept was proposed in 2017 by the European Space Agency. See also The Living Cosmos References Bibliography The International Journal of Astrobiology, published by Cambridge University Press, is the forum for practitioners in this interdisciplinary field. Astrobiology, published by Mary Ann Liebert, Inc., is a peer-reviewed journal that explores the origins of life, evolution, distribution, and destiny in the universe. Loeb, Avi (2021). Extraterrestrial: The First Sign of Intelligent Life Beyond Earth. Houghton Mifflin Harcourt. Further reading D. Goldsmith, T. Owen, The Search For Life in the Universe, Addison-Wesley Publishing Company, 2001 (3rd edition). Andy Weir's 2021 novel, Project Hail Mary, centers on astrobiology. External links Astrobiology.nasa.gov UK Centre for Astrobiology Spanish Centro de Astrobiología Astrobiology Research at The Library of Congress Astrobiology Survey – An introductory course on astrobiology Summary - Search For Life Beyond Earth (NASA; 25 June 2021) Extraterrestrial life Origin of life Astronomical sub-disciplines Branches of biology Speculative evolution
2790
https://en.wikipedia.org/wiki/Air%20show
Air show
An air show (or airshow, air fair, air tattoo) is a public event where aircraft are exhibited. They often include aerobatics demonstrations, without they are called "static air shows" with aircraft parked on the ground. The largest air show measured by number of exhibitors and size of exhibit space is Le Bourget, followed by Farnborough, with the Dubai Airshow and Singapore Airshow both claiming third place. The largest air show or fly-in by number of participating aircraft is EAA AirVenture Oshkosh, with approximately 10,000 aircraft participating annually. The biggest military airshow in the world is the Royal International Air Tattoo, at RAF Fairford in England. On the other hand, FIDAE in II Air Brigade of the FACH, next to the Arturo Merino Benítez International Airport in Santiago, Chile, is the largest aerospace fair in Latin America and the Southern Hemisphere. Outline Some airshows are held as a business venture or as a trade event where aircraft, avionics and other services are promoted to potential customers. Many air shows are held in support of local, national or military charities. Military air firms often organise air shows at military airfields as a public relations exercise to thank the local community, promote military careers and raise the profile of the military. Air "seasons" vary around the world. The United States enjoys a long season that generally runs from March to November, covering the spring, summer, and fall seasons. Other countries often have much shorter seasons. In Japan air shows are generally events held at Japan Air Self-Defense Force bases regularly throughout the year. The European season usually starts in late April or Early May and is usually over by mid October. The Middle East, Australia, and New Zealand hold their events between January and March. However, for many acts, the "off-season" does not mean a period of inactivity; pilots and performers use this time for maintenance and practice. The type of displays seen at shows are constrained by a number of factors, including the weather and visibility. Most aviation authorities now publish rules and guidance on minimum display heights and criteria for differing conditions. In addition to the weather, pilots and organizers must also consider local airspace restrictions. Most exhibitors will plan "full", "rolling" and "flat" display for varying weather and airspace conditions. The types of shows vary greatly. Some are large scale military events with large flying displays and ground exhibitions while others held at small local airstrips can often feature just one or two hours of flying with just a few stalls on the ground. Air displays can be held during day or night with the latter becoming increasingly popular. Air shows often, but do not always, take place over airfields; some have been held over the grounds of stately homes or castles and over the sea at coastal resorts. The first public international airshow, at which many types of aircraft were displayed and flown, was the Grande Semaine d'Aviation de la Champagne, held Aug. 22–29, 1909 in Reims. This had been preceded by what may have been the first ever gathering of enthusiasts, June 28 – July 19 of the same year at the airfield at La Brayelle, near Douai. Attractions Before World War II, air shows were associated with long-distance air races, often lasting many days and covering thousands of miles. While the Reno Air Races keep this tradition alive, most air shows today primarily feature a series of aerial demos of short duration. Most air shows feature warbirds, aerobatics, and demonstrations of modern military aircraft, and many air shows offer a variety of other aeronautical attractions as well, such as wing-walking, radio-controlled aircraft, water/slurry drops from firefighting aircraft, simulated helicopter rescues and sky diving. Specialist aerobatic aircraft have powerful piston engines, light weight and big control surfaces, making them capable of very high roll rates and accelerations. A skilled pilot will be able to climb vertically, perform very tight turns, tumble his aircraft end-over-end and perform manoeuvres during loops. Larger airshows can be headlined by military jet demonstration teams, such as the United States Navy Blue Angels, United States Air Force Thunderbirds, Royal Canadian Air Force Snowbirds, Royal Air Force Red Arrows, and Swiss Air Force Patrouille Suisse, among many others. Solo military demos, also known as tactical demos, feature one aircraft. The demonstration focuses on the capabilities of modern military aircraft. The display will usually demonstrate the aircraft's very short (and often very loud) rolls, fast speeds, slow approach speeds, as well as their ability to quickly make tight turns, to climb quickly, and their ability to be precisely controlled at a large range of speeds. Manoeuvres include aileron rolls, barrel rolls, hesitation rolls, Cuban-8s, tight turns, high-alpha flight, a high-speed pass, double Immelmans, and touch-and-gos. Tactical demos may include simulated bomb drops, sometimes with pyrotechnics on the ground for effect. Aircraft with special characteristics that give them unique capabilities will often display those in their demos; For example, Russian fighters with thrust vectoring may be used to perform the cobra maneuver or the Kulbit, while VTOL aircraft such as the Harrier may display such vertical capabilities or perform complex maneuvers with them. Some military air shows also feature demonstrations of aircraft ordnance in airstrikes and close air support, using either blanks or live munitions. Safety Air shows may present some risk to spectators and aviators. Accidents have occurred, sometimes with a large loss of life, such as the 1988 Ramstein air show disaster (70 deaths) in Germany and the 2002 Sknyliv air show disaster (77 deaths) in Ukraine. Because of these accidents, the various aviation authorities around the world have set rules and guidance for those running and participating in air displays. For example, after the breakup of an aircraft at 1952 Farnborough air show (31 deaths), the separation between display and spectators was increased. Air displays are often monitored by aviation authorities to ensure safe procedures. In the United Kingdom, local authorities will first need to approve any application for an event to which the public is admitted. No approval, no event. The first priority must be to arrange insurance cover and details can be obtained from your local authority. An added complication is a whole new raft of legislation concerning Health & Safety in particular Corporate Manslaughter, which can involve the event organiser being charged with a criminal offence if any of the insurances and risk assessments are not fully completed well in advance of the event. If this very basic step is not completed then any further activity should be halted until it is. Rules govern the distance from the crowds that aircraft must fly. These vary according to the rating of the pilot/crew, the type of aircraft and the way the aircraft is being flown. For instance, slower, lighter aircraft are usually allowed closer and lower to the crowd than larger, faster types. Also, a fighter jet flying straight and level will be able to do so closer to the crowd and lower than if it were performing a roll or a loop. Pilots can get authorizations for differing types of displays (i.e. limbo flying, basic aerobatics to unlimited aerobatics) and to differing minimum base heights above the ground. To gain such authorizations, the pilots will have to demonstrate to an examiner that they can perform to those limits without endangering themselves, ground crew or spectators. Despite display rules and guidances, accidents have continued to happen. However, air show accidents are rare and where there is proper supervision air shows have impressive safety records. Each year, organizations such as International Council of Air Shows and European Airshow Council meet and discuss various subjects including air show safety where accidents are discussed and lessons learned. See also Fly-in Flypast Barnstorming List of airshow accidents List of air shows Teardrop turn Whifferdill turn Bessie Coleman References Further reading Brett Holman, "The militarisation of aerial theatre: air displays and airmindedness in Britain and Australia between the world wars", Contemporary British History, vol. 33, no. 4 (2019), pp. 483–506. Air Show Accidents: "Reviewing the Notams Before the Show to Avoid Accidents" External links International Council of Air Shows Experimental Aircraft Association Calendar Royal Aero Club Events Flightglobal's Upcoming air shows USAF Thunderbirds Canadian Forces Snowbirds History of transport events
2802
https://en.wikipedia.org/wiki/Akihabara
Akihabara
is a neighborhood in the Chiyoda ward of Tokyo, Japan, generally considered to be the area surrounding Akihabara Station. Administratively, the area named Akihabara is actually found in the and Kanda-Sakumachō districts in Chiyoda. There also exists an administrative district called Akihabara in the Taitō ward further north of Akihabara Station, but it is not the place people generally refer to as Akihabara. The name Akihabara is a shortening of , which ultimately comes from , named after a fire-controlling deity of a firefighting shrine built after the area was destroyed by a fire in 1869. Akihabara gained the nickname shortly after World War II for being a major shopping center for household electronic goods and the post-war black market. Akihabara is considered by many to be the epicentre of modern Japanese otaku culture, and is a major shopping district for video games, anime, manga, electronics and computer-related goods. Icons from popular anime and manga are displayed prominently on the shops in the area, and numerous maid cafés and some arcades are found throughout the district. Geography The main area of Akihabara is located on a street just west of Akihabara Station, where most of the major shops are situated. Most of the electronics shops are just west of the station, and the anime and manga shops and the cosplay cafés are north of them. As mentioned above, the area called Akihabara now ranges over some districts in Chiyoda ward: , , and . There exists an administrative district called Akihabara in the Taitō ward further north of the station, but it is not the place which people generally refer to as Akihabara. It borders on Sotokanda in between Akihabara and Okachimachi stations, but is half occupied by JR tracks. History The area that is now Akihabara was once near a city gate of Edo and served as a passage between the city and northwestern Japan. This made the region a home to many craftsmen and tradesmen, as well as some low-class samurai. One of Tokyo's frequent fires destroyed the area in 1869, and the people decided to replace the buildings of the area with a shrine called Chinkasha (now known as Akiba Shrine ), meaning fire extinguisher shrine, in an attempt to prevent the spread of future fires. The locals nicknamed the shrine Akiba after the deity that could control fire, and the area around it became known as Akibagahara and later Akihabara. After Akihabara Station was built in 1888, the shrine was moved to the Taitō ward where it still resides today. Since its opening in 1890, Akihabara Station became a major freight transit point, which allowed a vegetable and fruit market to spring up in the district. Then, in the 1920s, the station saw a large volume of passengers after opening for public transport, and after World War II, the black market thrived in the absence of a strong government. This disconnection of Akihabara from government authority has allowed the district to grow as a market city and given rise to an excellent atmosphere for entrepreneurship. In the 1930s, this climate turned Akihabara into a future-oriented market region specializing in household electronics, such as washing machines, refrigerators, televisions, and stereos, earning Akihabara the nickname "Electric Town". As household electronics began to lose their futuristic appeal in the 1980s, the shops of Akihabara shifted their focus to home computers at a time when they were only used by specialists and hobbyists. This new specialization brought in a new type of consumer, computer nerds or otaku. The market in Akihabara naturally latched onto their new customer base that was focused on anime, manga, and video games. The connection between Akihabara and otaku has survived and grown to the point that the region is now known worldwide as a center for otaku culture, and some otaku even consider Akihabara to be a sacred place. Otaku culture The influence of otaku culture has shaped Akihabara's businesses and buildings to reflect the interests of otaku and gained the district worldwide fame for its distinctive imagery. Akihabara tries to create an atmosphere as close as possible to the game and anime worlds of customers' interest. The streets of Akihabara are covered with anime and manga icons, and cosplayers line the sidewalks handing out advertisements, especially for maid cafés. Release events, special events, and conventions in Akihabara give anime and manga fans frequent opportunities to meet the creators of the works they follow and strengthen the connection between the region and otaku culture. The design of many of the buildings serves to create the sort of atmosphere that draws in otaku. Architects design the stores of Akihabara to be more opaque and closed to reflect the general desire of many otaku to live in their anime worlds rather than display their interests to the world at large. Akihabara's role as a free market has also allowed a large amount of amateur work to find a passionate audience in the otaku who frequent the area. Doujinshi (amateur or fanmade manga based on an anime/manga/game) has been growing in Akihabara since the 1970s when publishers began to drop manga that were not ready for large markets. Comiket is largest spot sale of Doujinshi in Japan. See also Akiba-kei Akihabara Trilogy Kanda Shrine, Shinto shrine near Akihabara Nipponbashi, in Osaka Ōsu, in Nagoya Tourism in Japan References External links Akihabara Area Tourism Organization Akihabara Electrical Town Organization website Go Tokyo Akihabara Guide Chiyoda, Tokyo Electronics districts Neighborhoods of Tokyo Otaku Shopping districts and streets in Japan Taitō Tourist attractions in Tokyo Akiha faith
2807
https://en.wikipedia.org/wiki/Active%20Directory
Active Directory
Active Directory (AD) is a directory service developed by Microsoft for Windows domain networks. Windows Server operating systems include it as a set of processes and services. Originally, only centralized domain management used Active Directory. However, it ultimately became an umbrella title for various directory-based identity-related services. A domain controller is a server running the Active Directory Domain Service (AD DS) role. It authenticates and authorizes all users and computers in a Windows domain-type network, assigning and enforcing security policies for all computers and installing or updating software. For example, when a user logs into a computer part of a Windows domain, Active Directory checks the submitted username and password and determines whether the user is a system administrator or a non-admin user. Furthermore, it allows the management and storage of information, provides authentication and authorization mechanisms, and establishes a framework to deploy other related services: Certificate Services, Active Directory Federation Services, Lightweight Directory Services, and Rights Management Services. Active Directory uses Lightweight Directory Access Protocol (LDAP) versions 2 and 3, Microsoft's version of Kerberos, and DNS. Robert R. King defined it in the following way: History Like many information-technology efforts, Active Directory originated out of a democratization of design using Requests for Comments (RFCs). The Internet Engineering Task Force (IETF) oversees the RFC process and has accepted numerous RFCs initiated by widespread participants. For example, LDAP underpins Active Directory. Also, X.500 directories and the Organizational Unit preceded the Active Directory concept that uses those methods. The LDAP concept began to emerge even before the founding of Microsoft in April 1975, with RFCs as early as 1971. RFCs contributing to LDAP include RFC 1823 (on the LDAP API, August 1995), RFC 2307, RFC 3062, and RFC 4533. Microsoft previewed Active Directory in 1999, released it first with Windows 2000 Server edition, and revised it to extend functionality and improve administration in Windows Server 2003. Active Directory support was also added to Windows 95, Windows 98, and Windows NT 4.0 via patch, with some unsupported features. Additional improvements came with subsequent versions of Windows Server. In Windows Server 2008, Microsoft added further services to Active Directory, such as Active Directory Federation Services. The part of the directory in charge of managing domains, which was a core part of the operating system, was renamed Active Directory Domain Services (ADDS) and became a server role like others. "Active Directory" became the umbrella title of a broader range of directory-based services. According to Byron Hynes, everything related to identity was brought under Active Directory's banner. Active Directory Services Active Directory Services consist of multiple directory services. The best known is Active Directory Domain Services, commonly abbreviated as AD DS or simply AD. Domain Services Active Directory Domain Services (AD DS) is the foundation of every Windows domain network. It stores information about domain members, including devices and users, verifies their credentials, and defines their access rights. The server running this service is called a domain controller. A domain controller is contacted when a user logs into a device, accesses another device across the network, or runs a line-of-business Metro-style app sideloaded into a machine. Other Active Directory services (excluding LDS, as described below) and most Microsoft server technologies rely on or use Domain Services; examples include Group Policy, Encrypting File System, BitLocker, Domain Name Services, Remote Desktop Services, Exchange Server, and SharePoint Server. The self-managed Active Directory DS must be distinct from managed Azure AD DS, a cloud product. Lightweight Directory Services Active Directory Lightweight Directory Services (AD LDS), previously called Active Directory Application Mode (ADAM), implements the LDAP protocol for AD DS. It runs as a service on Windows Server and offers the same functionality as AD DS, including an equal API. However, AD LDS does not require the creation of domains or domain controllers. It provides a Data Store for storing directory data and a Directory Service with an LDAP Directory Service Interface. Unlike AD DS, multiple AD LDS instances can operate on the same server. Certificate Services Active Directory Certificate Services (AD CS) establishes an on-premises public key infrastructure. It can create, validate, revoke and perform other similar actions, public key certificates for internal uses of an organization. These certificates can be used to encrypt files (when used with Encrypting File System), emails (per S/MIME standard), and network traffic (when used by virtual private networks, Transport Layer Security protocol or IPSec protocol). AD CS predates Windows Server 2008, but its name was simply Certificate Services. AD CS requires an AD DS infrastructure. Federation Services Active Directory Federation Services (AD FS) is a single sign-on service. With an AD FS infrastructure in place, users may use several web-based services (e.g. internet forum, blog, online shopping, webmail) or network resources using only one set of credentials stored at a central location, as opposed to having to be granted a dedicated set of credentials for each service. AD FS uses many popular open standards to pass token credentials such as SAML, OAuth or OpenID Connect. AD FS supports encryption and signing of SAML assertions. AD FS's purpose is an extension of that of AD DS: The latter enables users to authenticate with and use the devices that are part of the same network, using one set of credentials. The former enables them to use the same set of credentials in a different network. As the name suggests, AD FS works based on the concept of federated identity. AD FS requires an AD DS infrastructure, although its federation partner may not. Rights Management Services Active Directory Rights Management Services (AD RMS), previously known as Rights Management Services or RMS before Windows Server 2008, is server software that allows for information rights management, included with Windows Server. It uses encryption and selective denial to restrict access to various documents, such as corporate e-mails, Microsoft Word documents, and web pages. It also limits the operations authorized users can perform on them, such as viewing, editing, copying, saving, or printing. IT administrators can create pre-set templates for end users for convenience, but end users can still define who can access the content and what actions they can take. Logical structure Active Directory is a service comprising a database and executable code. It is responsible for managing requests and maintaining the database. The Directory System Agent is the executable part, a set of Windows services and processes that run on Windows 2000 and later. Accessing the objects in Active Directory databases is possible through various interfaces such as LDAP, ADSI, messaging API, and Security Accounts Manager services. Objects used Active Directory structures consist of information about objects classified into two categories: resources (such as printers) and security principals (which include user or computer accounts and groups). Each security principal is assigned a unique security identifier (SID). An object represents a single entity, such as a user, computer, printer, or group, along with its attributes. Some objects may even contain other objects within them. Each object has a unique name, and its definition is a set of characteristics and information by a schema, which determines the storage in the Active Directory. Administrators can extend or modify the schema using the schema object when needed. However, because each schema object is integral to the definition of Active Directory objects, deactivating or changing them can fundamentally alter or disrupt a deployment. Modifying the schema affects the entire system automatically, and new objects cannot be deleted, only deactivated. Changing the schema usually requires planning. Forests, trees, and domains In an Active Directory network, the framework that holds objects has different levels: the forest, tree, and domain. Domains within a deployment contain objects stored in a single replicable database, and the DNS name structure identifies their domains, the namespace. A domain is a logical group of network objects such as computers, users, and devices that share the same Active Directory database. On the other hand, a tree is a collection of domains and domain trees in a contiguous namespace linked in a transitive trust hierarchy. The forest is at the top of the structure, a collection of trees with a standard global catalog, directory schema, logical structure, and directory configuration. The forest is a secure boundary that limits access to users, computers, groups, and other objects. Organizational units The objects held within a domain can be grouped into organizational units (OUs). OUs can provide hierarchy to a domain, ease its administration, and can resemble the organization's structure in managerial or geographical terms. OUs can contain other OUs—domains are containers in this sense. Microsoft recommends using OUs rather than domains for structure and simplifying the implementation of policies and administration. The OU is the recommended level at which to apply group policies, which are Active Directory objects formally named group policy objects (GPOs), although policies can also be applied to domains or sites (see below). The OU is the level at which administrative powers are commonly delegated, but delegation can be performed on individual objects or attributes as well. Organizational units do not each have a separate namespace. As a consequence, for compatibility with Legacy NetBios implementations, user accounts with an identical sAMAccountName are not allowed within the same domain even if the accounts objects are in separate OUs. This is because sAMAccountName, a user object attribute, must be unique within the domain. However, two users in different OUs can have the same common name (CN), the name under which they are stored in the directory itself such as "fred.staff-ou.domain" and "fred.student-ou.domain", where "staff-ou" and "student-ou" are the OUs. In general, the reason for this lack of allowance for duplicate names through hierarchical directory placement is that Microsoft primarily relies on the principles of NetBIOS, which is a flat-namespace method of network object management that, for Microsoft software, goes all the way back to Windows NT 3.1 and MS-DOS LAN Manager. Allowing for duplication of object names in the directory, or completely removing the use of NetBIOS names, would prevent backward compatibility with legacy software and equipment. However, disallowing duplicate object names in this way is a violation of the LDAP RFCs on which Active Directory is supposedly based. As the number of users in a domain increases, conventions such as "first initial, middle initial, last name" (Western order) or the reverse (Eastern order) fail for common family names like Li (李), Smith or Garcia. Workarounds include adding a digit to the end of the username. Alternatives include creating a separate ID system of unique employee/student ID numbers to use as account names in place of actual users' names and allowing users to nominate their preferred word sequence within an acceptable use policy. Because duplicate usernames cannot exist within a domain, account name generation poses a significant challenge for large organizations that cannot be easily subdivided into separate domains, such as students in a public school system or university who must be able to use any computer across the network. Shadow groups In Microsoft's Active Directory, OUs do not confer access permissions, and objects placed within OUs are not automatically assigned access privileges based on their containing OU. It represents a design limitation specific to Active Directory, and other competing directories, such as Novell NDS, can set access privileges through object placement within an OU. Active Directory requires a separate step for an administrator to assign an object in an OU as a group member also within that OU. Using only the OU location to determine access permissions is unreliable since the entity might not have been assigned to the group object for that OU yet. A common workaround for an Active Directory administrator is to write a custom PowerShell or Visual Basic script to automatically create and maintain a user group for each OU in their Directory. The scripts run periodically to update the group to match the OU's account membership. However, they cannot instantly update the security groups anytime the directory changes, as occurs in competing directories, as security is directly implemented into the Directory. Such groups are known as shadow groups. Once created, these shadow groups are selectable in place of the OU in the administrative tools. Microsoft's Server 2008 Reference documentation mentions shadow groups but does not provide instructions on creating them. Additionally, there are no available server methods or console snap-ins for managing these groups. An organization must determine the structure of its information infrastructure by dividing it into one or more domains and top-level OUs. This decision is critical and can base on various models such as business units, geographical locations, IT service, object type, or a combination of these models. The immediate purpose of organizing OUs is to simplify administrative delegation and, secondarily, to apply group policies. It's important to note that while OUs serve as an administrative boundary, the forest itself is the only security boundary. All other domains must trust any administrator in the forest to maintain security. Partitions The Active Directory database is organized in partitions, each holding specific object types and following a particular replication pattern. Microsoft often refers to these partitions as 'naming contexts. The 'Schema' partition defines object classes and attributes within the forest. The 'Configuration' partition contains information on the physical structure and configuration of the forest (such as the site topology). Both replicate all domains in the forest. The 'Domain' partition holds all objects created in that domain and replicates only within it. Physical structure Sites are physical (rather than logical) groupings defined by one or more IP subnets. AD also defines connections, distinguishing low-speed (e.g., WAN, VPN) from high-speed (e.g., LAN) links. Site definitions are independent of the domain and OU structure and are shared across the forest. Sites play a crucial role in managing network traffic created by replication and directing clients to their nearest domain controllers (DCs). Microsoft Exchange Server 2007 uses the site topology for mail routing. Administrators can also define policies at the site level. The Active Directory information is physically held on one or more peer domain controllers, replacing the NT PDC/BDC model. Each DC has a copy of the Active Directory. Member servers joined to Active Directory that are not domain controllers are called Member Servers. In the domain partition, a group of objects acts as copies of domain controllers set up as global catalogs. These global catalog servers offer a comprehensive list of all objects located in the forest. Global Catalog servers replicate all objects from all domains to themselves, providing an international listing of entities in the forest. However, to minimize replication traffic and keep the GC's database small, only selected attributes of each object are replicated, called the partial attribute set (PAS). The PAS can be modified by modifying the schema and marking features for replication to the GC. Earlier versions of Windows used NetBIOS to communicate. Active Directory is fully integrated with DNS and requires TCP/IP—DNS. To fully operate, the DNS server must support SRV resource records, also known as service records. Replication Active Directory uses multi-master replication to synchronize changes, meaning replicas pull changes from the server where the change occurred rather than being pushed to them. The Knowledge Consistency Checker (KCC) uses defined sites to manage traffic and create a replication topology of site links. Intra-site replication occurs frequently and automatically due to change notifications, which prompt peers to begin a pull replication cycle. Replication intervals between different sites are usually less consistent and don't usually use change notifications. However, it's possible to set it up to be the same as replication between locations on the same network if needed. Each DS3, T1, and ISDN link can have a cost, and the KCC alters the site link topology accordingly. Replication may occur transitively through several site links on same-protocol site link bridges if the price is low. However, KCC automatically costs a direct site-to-site link lower than transitive connections. A bridgehead server in each zone can send updates to other DCs in the exact location to replicate changes between sites. To configure replication for Active Directory zones, activate DNS in the domain based on the site. To replicate Active Directory, Remote Procedure Calls (RPC) over IP (RPC/IP) are used. SMTP is used to replicate between sites but only for modifications in the Schema, Configuration, or Partial Attribute Set (Global Catalog) GCs. It's not suitable for reproducing the default Domain partition. Implementation Generally, a network utilizing Active Directory has more than one licensed Windows server computer. Backup and restore of Active Directory are possible for a network with a single domain controller. However, Microsoft recommends more than one domain controller to provide automatic failover protection of the directory. Domain controllers are ideally single-purpose for directory operations only and should not run any other software or role. Since certain Microsoft products, like SQL Server and Exchange, can interfere with the operation of a domain controller, isolation of these products on additional Windows servers is advised. Combining them can complicate the configuration and troubleshooting of the domain controller or the other installed software more complex. If planning to implement Active Directory, a business should purchase multiple Windows server licenses to have at least two separate domain controllers. Administrators should consider additional domain controllers for performance or redundancy and individual servers for tasks like file storage, Exchange, and SQL Server since this will guarantee that all server roles are adequately supported. One way to lower the physical hardware costs is by using virtualization. However, for proper failover protection, Microsoft recommends not running multiple virtualized domain controllers on the same physical hardware. Database The Active-Directory database, the directory store, in Windows 2000 Server uses the JET Blue-based Extensible Storage Engine (ESE98). Each domain controller's database is limited to 16 terabytes and 2 billion objects (but only 1 billion security principals). Microsoft has created NTDS databases with more than 2 billion objects. NT4's Security Account Manager could support up to 40,000 objects. It has two main tables: the data table and the link table. Windows Server 2003 added a third main table for security descriptor single instancing. Programs may access the features of Active Directory via the COM interfaces provided by Active Directory Service Interfaces. Trusting To allow users in one domain to access resources in another, Active Directory uses trusts. Trusts inside a forest are automatically created when domains are created. The forest sets the default boundaries of trust, and implicit, transitive trust is automatic for all domains within a forest. Terminology One-way trust One domain allows access to users on another domain, but the other domain does not allow access to users on the first domain. Two-way trust Two domains allow access to users on both domains. Trusted domain The domain that is trusted; whose users have access to the trusting domain. Transitive trust A trust that can extend beyond two domains to other trusted domains in the forest. Intransitive trust A one way trust that does not extend beyond two domains. Explicit trust A trust that an admin creates. It is not transitive and is one way only. Cross-link trust An explicit trust between domains in different trees or the same tree when a descendant/ancestor (child/parent) relationship does not exist between the two domains. Shortcut Joins two domains in different trees, transitive, one- or two-way. Forest trust Applies to the entire forest. Transitive, one- or two-way. Realm Can be transitive or nontransitive (intransitive), one- or two-way. External Connect to other forests or non-Active Directory domains. Nontransitive, one- or two-way. PAM trust A one-way trust used by Microsoft Identity Manager from a (possibly low-level) production forest to a (Windows Server 2016 functionality level) 'bastion' forest, which issues time-limited group memberships. Management tools Microsoft Active Directory management tools include: Active Directory Administrative Center (Introduced with Windows Server 2012 and above), Active Directory Users and Computers, Active Directory Domains and Trusts, Active Directory Sites and Services, ADSI Edit, Local Users and Groups, Active Directory Schema snap-ins for Microsoft Management Console (MMC), SysInternals ADExplorer These management tools may not provide enough functionality for efficient workflow in large environments. Some third-party tools extend the administration and management capabilities. They provide essential features for a more convenient administration process, such as automation, reports, integration with other services, etc. Unix integration Varying levels of interoperability with Active Directory can be achieved on most Unix-like operating systems (including Unix, Linux, Mac OS X or Java and Unix-based programs) through standards-compliant LDAP clients, but these systems usually do not interpret many attributes associated with Windows components, such as Group Policy and support for one-way trusts. Third parties offer Active Directory integration for Unix-like platforms, including: PowerBroker Identity Services, formerly Likewise (BeyondTrust, formerly Likewise Software) – Allows a non-Windows client to join Active Directory ADmitMac (Thursby Software Systems) Samba (free software under GPLv3) – Can act as a domain controller The schema additions shipped with Windows Server 2003 R2 include attributes that map closely enough to RFC 2307 to be generally usable. The reference implementation of RFC 2307, nss_ldap and pam_ldap provided by PADL.com, support these attributes directly. The default schema for group membership complies with RFC 2307bis (proposed). Windows Server 2003 R2 includes a Microsoft Management Console snap-in that creates and edits the attributes. An alternative option is to use another directory service as non-Windows clients authenticate to this while Windows Clients authenticate to Active Directory. Non-Windows clients include 389 Directory Server (formerly Fedora Directory Server, FDS), ViewDS v7.2 XML Enabled Directory, and Sun Microsystems Sun Java System Directory Server. The latter two are both able to perform two-way synchronization with Active Directory and thus provide a "deflected" integration. Another option is to use OpenLDAP with its translucent overlay, which can extend entries in any remote LDAP server with additional attributes stored in a local database. Clients pointed at the local database see entries containing both the remote and local attributes, while the remote database remains completely untouched. Administration (querying, modifying, and monitoring) of Active Directory can be achieved via many scripting languages, including PowerShell, VBScript, JScript/JavaScript, Perl, Python, and Ruby. Free and non-free Active Directory administration tools can help to simplify and possibly automate Active Directory management tasks. Since October 2017 Amazon AWS offers integration with Microsoft Active Directory. See also AGDLP (implementing role based access controls using nested groups) Apple Open Directory Flexible single master operation FreeIPA List of LDAP software System Security Services Daemon (SSSD) Univention Corporate Server References External links Microsoft Technet: White paper: Active Directory Architecture (Single technical document that gives an overview about Active Directory.) Microsoft Technet: Detailed description of Active Directory on Windows Server 2003 Microsoft MSDN Library: [MS-ADTS]: Active Directory Technical Specification (part of the Microsoft Open Specification Promise) Active Directory Application Mode (ADAM) Microsoft MSDN: [AD-LDS]: Active Directory Lightweight Directory Services Microsoft TechNet: [AD-LDS]: Active Directory Lightweight Directory Services Microsoft MSDN: Active Directory Schema Microsoft TechNet: Understanding Schema Microsoft TechNet Magazine: Extending the Active Directory Schema Microsoft MSDN: Active Directory Certificate Services Microsoft TechNet: Active Directory Certificate Services Directory services Public key infrastructure Microsoft server technology Windows components Windows 2000
2809
https://en.wikipedia.org/wiki/Arian%20%28disambiguation%29
Arian (disambiguation)
Arianism is a nontrinitarian Christological doctrine. Arian may also refer to: Pertaining to Arius A follower of Arius, a Christian presbyter in the 3rd and 4th century Arian controversy, several controversies which divided the early Christian church Arian fragment, Arian palimpsest People Groups of people Arians or Areians, ancient people living in Ariana (origin of the modern name Iran) Aryan, a term associated with the Proto-Indo-Iranians Aryan race, the racial concept An inhabitant of Aria (today's Herat, Afghanistan), used by the ancient and medieval Greeks (as Ἄρ(ε)ιοι/Ar(e)ioi) and Romans (as Arii) Given name Arian Asllani (born 1983), American rapper known as Action Bronson Arian Bimo (born 1959), Albanian footballer Arian Çuliqi, Albanian television director and screenwriter Arian Foster (born 1986), American football player Arian Hametaj (born 1957), Albanian footballer Arián Iznaga, Cuban Paralympian sprinter Arian Kabashi (born 1996), Kosovan footballer Arian Kabashi (born 1997), Swedish footballer Arlan Lerio (born 1976), Filipino boxer Arian Leviste (born 1970), American electronic music artist, producer, and DJ Arian Moayed (born 1980), Iranian-born American actor and theater producer Arian Moreno (born 2003), Venezuelan footballer Surname Arian is a surname that originated in Ancient Persia Arman Arian (born 1981), Iranian author, novelist and researcher Asher Arian (1938–2010), American political scientist Asma Arian, German-Qatari human rights activist Laila Al-Arian (born 1980s), American broadcast journalist Sami Al-Arian (born 1958), Palestinian-American civil rights activist Praskovia Arian (1864–1949) Russian and Soviet writer and feminist Bruce Arians (born 1952), American football coach and former player Jake Arians (born 1978), American football player Other Arian (band), a pop band in Iran Arian (newspaper), an Iranian newspaper since 1914 Arian, an outsider's name for a member of the Polish Brethren Arian, a person born under the constellation Aries (astrology) See also Arian Kartli, ancient Georgian country Al-Arian, an Arab village in northern Israel Aaryan, a given name and surname Ariane (disambiguation), the French spelling of Ariadne, a character in Greek mythology Ariann Black, Canadian-American female magician Ariano (disambiguation) Arien (disambiguation) Arius (disambiguation) Ariyan A. Johnson Arrian, Greek historian Aryan (name) Ghamar Ariyan
2813
https://en.wikipedia.org/wiki/Aragonese%20language
Aragonese language
Aragonese ( ; in Aragonese) is a Romance language spoken in several dialects by about 12,000 people as of 2011, in the Pyrenees valleys of Aragon, Spain, primarily in the comarcas of Somontano de Barbastro, Jacetania, Alto Gállego, Sobrarbe, and Ribagorza/Ribagorça. It is the only modern language which survived from medieval Navarro-Aragonese in a form distinct from Spanish. Historically, people referred to the language as ('talk' or 'speech'). Native Aragonese people usually refer to it by the names of its local dialects such as (from Valle de Hecho) or (from the Benasque Valley). History Aragonese, which developed in portions of the Ebro basin, can be traced back to the High Middle Ages. It spread throughout the Pyrenees to areas where languages similar to modern Basque might have been previously spoken. The Kingdom of Aragon (formed by the counties of Aragon, Sobrarbe and Ribagorza) expanded southward from the mountains, pushing the Moors farther south in the Reconquista and spreading the Aragonese language. The union of the Catalan counties and the Kingdom of Aragon which formed the 12th-century Crown of Aragon did not merge the languages of the two territories; Catalan continued to be spoken in the east and Navarro-Aragonese in the west, with the boundaries blurred by dialectal continuity. The Aragonese Reconquista in the south ended with the cession of Murcia by James I of Aragon to the Kingdom of Castile as dowry for an Aragonese princess. The best-known proponent of the Aragonese language was Johan Ferrandez d'Heredia, the Grand Master of the Knights Hospitaller in Rhodes at the end of the 14th century. He wrote an extensive catalog of works in Aragonese and translated several works from Greek into Aragonese (the first in medieval Europe). The spread of Castilian (Spanish), the Castilian origin of the Trastámara dynasty, and the similarity between Castilian (Spanish) and Aragonese facilitated the recession of the latter. A turning point was the 15th-century coronation of the Castilian Ferdinand I of Aragon, also known as Ferdinand of Antequera. In the early 18th century, after the defeat of the allies of Aragon in the War of the Spanish Succession, Philip V ordered the prohibition of the Aragonese language in the schools and the establishment of Castilian (Spanish) as the only official language in Aragon. This was ordered in the Aragonese Nueva Planta decrees of 1707. In recent times, Aragonese was mostly regarded as a group of rural dialects of Spanish. Compulsory education undermined its already weak position; for example, pupils were punished for using it. However, the 1978 Spanish transition to democracy heralded literary works and studies of the language. Modern Aragonese Aragonese is the native language of the Aragonese mountain ranges of the Pyrenees, in the comarcas of Somontano, Jacetania, Sobrarbe, and Ribagorza. Cities and towns in which Aragonese is spoken are Huesca, Graus, Monzón, Barbastro, Bielsa, Chistén, Fonz, Echo, Estadilla, Benasque, Campo, Sabiñánigo, Jaca, Plan, Ansó, Ayerbe, Broto, and El Grado. It is spoken as a second language by inhabitants of Zaragoza, Huesca, Ejea de los Caballeros, or Teruel. According to recent polls, there are about 25,500 speakers (2011) including speakers living outside the native area. In 2017, the Dirección General de Política Lingüística de Aragón estimated there were 10,000 to 12,000 active speakers of Aragonese. In 2009, the Languages Act of Aragon (Law 10/2009) recognized the "native language, original and historic" of Aragon. The language received several linguistic rights, including its use in public administration. Some of the legislation was repealed by a new law in 2013 (Law 3/2013). [See Languages Acts of Aragon for more information on the subject] Dialects Western dialect: Ansó, Valle de Hecho, Chasa, Berdún, Chaca Central dialect: Panticosa, Biescas, Torla, Broto, Bielsa, Yebra de Basa, Aínsa-Sobrarbe Eastern dialect: Benás, Plan, Bisagorri, Campo, Perarrúa, Graus, Estadilla Southern dialect: Agüero, Ayerbe, Rasal, Bolea, Lierta, Uesca, Almudévar, Nozito, Labata, Alguezra, Angüés, Pertusa, Balbastro, Nabal Phonology Traits Aragonese has many historical traits in common with Catalan. Some are conservative features that are also shared with the Astur-Leonese languages and Galician-Portuguese, where Spanish innovated in ways that did not spread to nearby languages. Shared with Catalan Romance initial f- is preserved, e.g. > ('son', Sp. , Cat. , Pt. ). Romance groups cl-, fl- and pl- are preserved and in most dialects do not undergo any change, e.g. clavis > clau ('key', Sp. llave, Cat. clau, Pt. chave). However, in some transitional dialects from both sides (Ribagorzano in Aragonese and Ribagorçà and Pallarès in Catalan) it becomes cll-, fll- and pll-, e.g. clavis > cllau. Romance palatal approximant (ge-, gi-, i-) consistently became medieval , as in medieval Catalan and Portuguese. This becomes modern ch , as a result of the devoicing of sibilants (see below). In Spanish, the medieval result was either /, (modern ), , or nothing, depending on the context. e.g. > ('young man', Sp. , Cat. ), > ('to freeze', Sp. , Cat. ). Romance groups -lt-, -ct- result in , e.g. > ('done', Sp. , Cat. , Gal./Port. ), > ('many, much', Sp. , Cat. , Gal. , Port. ). Romance groups -x-, -ps-, scj- result in voiceless palatal fricative ix , e.g. > ('crippled', Sp. , Cat. ). Romance groups -lj-, -c'l-, -t'l- result in palatal lateral ll , e.g. > ('woman', Sp. , Cat. ), > ('needle', Sp. , Cat. ). Shared with Catalan and Spanish Open o, e from Romance result systematically in diphthongs , , e.g. > ('old woman', Sp. , Cat. , Pt. ). This includes before a palatal approximant, e.g. > ('eight', Sp. , Cat. , Pt. oito). Spanish diphthongizes except before yod, whereas Catalan only diphthongizes before yod. Voiced stops may be lenited to approximants . Shared with Spanish Loss of final unstressed -e but not -o, e.g. > ('big'), > ('done'). Catalan loses both -e and -o (Cat. , ); Spanish preserves -o and sometimes -e (Sp. , ~ ). Former voiced sibilants become voiceless (, ). The palatal is most often realized as a fricative . Shared with neither Latin -b- is maintained in past imperfect endings of verbs of the second and third conjugations: ('he had', Sp. , Cat. ), ('he was sleeping', Sp. , Cat. ). High Aragonese dialects () and some dialects of Gascon have preserved the voicelessness of many intervocalic stop consonants, e.g. > ('sheep hurdle', Cat. , Fr. ), > ('crested lark', Sp. , Cat. ). Several Aragonese dialects maintain Latin -ll- as geminate . The mid vowels can be as open as , mainly in the Benasque dialect. No native word can begin with an , a trait shared with Gascon and Basque. Vowels Consonants Orthography In 2010, the Academia de l'Aragonés (founded in 2006) established an orthographic standard to modernize medieval orthography and to make it more etymological. The new orthography is used by the Aragonese Wikipedia. Aragonese had two orthographic standards: The , codified in 1987 by the Consello d'a Fabla Aragonesa (CFA) at a convention in Huesca, is used by most Aragonese writers. It has a more uniform system of assigning letters to phonemes, with less regard for etymology; words traditionally written with and are uniformly written with in the Uesca system. Similarly, , , and before and are all written . It uses letters associated with Spanish, such as . The , devised in 2004 by the Sociedat de Lingüistica Aragonesa (SLA), is used by some Aragonese writers. It uses etymological forms which are closer to Catalan, Occitan, and medieval Aragonese sources; trying to come closer to the original Aragonese and the other Occitano-Romance languages. In the SLA system , ,, , and before and are distinct, and the digraph replaces . During the 16th century, Aragonese Moriscos wrote aljamiado texts (Romance texts in Arabic script), possibly because of their inability to write in Arabic. The language in these texts has a mixture of Aragonese and Castilian traits, and they are among the last known written examples of the Aragonese formerly spoken in central and southern Aragon. In 2023, a new orthographic standard has been published by the Academia Aragonesa de la Lengua. This version is close to the Academia de l'Aragonés orthography, but with the following differences: is always spelled cu, e. g. cuan, cuestión (exception is made for some loanwords: quad, quadrívium, quark, quásar, quáter, quórum); is spelled ny or ñ by personal preference; final z is not written as tz. The marginal phoneme (only in loanwords, e. g. jabugo) is spelled j in the Uesca, Academia de l'Aragonés and Academia Aragonesa de la Lengua standards (not mentioned in the SLA standard). Additionally, the Academia de l'Aragonés and Academia Aragonesa de la Lengua orthographies allow the letter j in some loanwords internationally known with it (e. g. jazz, jacuzzi, which normally have in the Aragonese pronunciation) and also mention the letters k and w, also used only in loanwords (w may represent or ). Grammar Aragonese grammar has a lot in common with Occitan and Catalan, but also Spanish. Articles The definite article in Aragonese has undergone dialect-related changes, with definite articles in Old Aragonese similar to their present Spanish equivalents. There are two main forms: These forms are used in the eastern and some central dialects. These forms are used in the western and some central dialects. Lexicology Neighboring Romance languages have influenced Aragonese. Catalan and Occitan influenced Aragonese for many years. Since the 15th century, Spanish has most influenced Aragonese; it was adopted throughout Aragon as the first language, limiting Aragonese to the northern region surrounding the Pyrenees. French has also influenced Aragonese; Italian loanwords have entered through other languages (such as Catalan), and Portuguese words have entered through Spanish. Germanic words came with the conquest of the region by Germanic peoples during the fifth century, and English has introduced a number of new words into the language. Gender Words that were part of the Latin second declension—as well as words that joined it later on—are usually masculine: > ('son') + > ('squirrel') Words that were part of the Latin first declension are usually feminine: > ('daughter'). Some Latin neuter plural nouns joined the first declension as singular feminine nouns: > ('leaf'). Words ending in -or are feminine: , , , and (in Medieval Aragonese) The names of fruit trees usually end in -era (a suffix derived from Latin -aria) and are usually feminine: a perera, a manzanera, a nuquera, , / , a olivera, a ciresera, l'almendrera The genders of river names vary: Many ending in -a are feminine: /, , , , , , , , etc. The last was known as during the 16th century. Many from the second and the third declension are masculine: L'Ebro, O Galligo, , . Pronouns Just like most other Occitano-Romance languages, Aragonese has partitive and locative clitic pronouns derived from the Latin and : / and //; unlike Ibero-Romance. Such pronouns are present in most major Romance languages (Catalan and , Occitan and , French and , and Italian and /). / is used for: Partitive objects: ("I haven't seen anything like that", literally 'Not (of it) I have seen like that'). Partitive subjects: ("It hurts so much", literally '(of it) it causes so much of pain') Ablatives, places from which movements originate: ("Memory goes away", literally '(away from [the mind]) memory goes') // is used for: Locatives, where something takes place: ("There was one of them"), literally '(Of them) there was one') Allatives, places that movements go towards or end: ('Go there (imperative)') Literature Aragonese was not written until the 12th and 13th centuries; the history , , , and date from this period; there is also an Aragonese version of the Chronicle of the Morea, differing also in its content and written in the late 14th century called . Early modern period Since 1500, Spanish has been the cultural language of Aragon; many Aragonese wrote in Spanish, and during the 17th century the Argensola brothers went to Castile to teach Spanish. Aragonese became a popular village language. During the 17th century, popular literature in the language began to appear. In a 1650 Huesca literary contest, Aragonese poems were submitted by Matías Pradas, Isabel de Rodas and "Fileno, montañés". Contemporary literature The 19th and 20th centuries have seen a renaissance of Aragonese literature in several dialects. In 1844, Braulio Foz's novel was published in the Almudévar (southern) dialect. The 20th century featured Domingo Miral's costumbrist comedies and Veremundo Méndez Coarasa's poetry, both in Hecho (western) Aragonese; Cleto Torrodellas' poetry and Tonón de Baldomera's popular writings in the Graus (eastern) dialect and Arnal Cavero's costumbrist stories and Juana Coscujuela's novel , also in the southern dialect. Aragonese in modern education The 1997 Aragonese law of languages stipulated that Aragonese (and Catalan) speakers had a right to the teaching of and in their own language. Following this, Aragonese lessons started in schools in the 1997–1998 academic year. It was originally taught as an extra-curricular, non-evaluable voluntary subject in four schools. However, whilst legally schools can choose to use Aragonese as the language of instruction, as of the 2013–2014 academic year, there are no recorded instances of this option being taken in primary or secondary education. In fact, the only current scenario in which Aragonese is used as the language of instruction is in the Aragonese philology university course, which is optional, taught over the summer and in which only some of the lectures are in Aragonese. Pre-school education In pre-school education, students whose parents wish them to be taught Aragonese receive between thirty minutes to one hour of Aragonese lessons a week. In the 2014–2015 academic year there were 262 students recorded in pre-school Aragonese lessons. Primary school education The subject of Aragonese now has a fully developed curriculum in primary education in Aragon. Despite this, in the 2014–2015 academic year there were only seven Aragonese teachers in the region across both pre-primary and primary education and none hold permanent positions, whilst the number of primary education students receiving Aragonese lessons was 320. As of 2017 there were 1068 reported Aragonese language students and 12 Aragonese language instructors in Aragon. Secondary school education There is no officially approved program or teaching materials for the Aragonese language at the secondary level, and though two non-official textbooks are available ( (Benítez, 2007) and (Campos, 2014)) many instructors create their own learning materials. Further, most schools with Aragonese programs that have the possibility of being offered as an examinative subject have elected not to do so. As of 2007 it is possible to use Aragonese as a language of instruction for multiple courses; however, no program is yet to instruct any curricular or examinative courses in Aragonese. As of the 2014–2015 academic year there were 14 Aragonese language students at the secondary level. Higher education Aragonese is not currently a possible field of study for a bachelor's or postgraduate degree in any official capacity, nor is Aragonese used as a medium of instruction. A bachelor's or master's degree may be obtained in Magisterio (teaching) at the University of Zaragoza; however, no specialization in Aragonese language is currently available. As such those who wish to teach Aragonese at the pre-school, primary, or secondary level must already be competent in the language by being a native speaker or by other means. Further, prospective instructors must pass an ad hoc exam curated by the individual schools at which they wish to teach in order to prove their competence, as there are no recognized standard competency exams for the Aragonese language. Since the 1994–1995 academic year, Aragonese has been an elective subject within the bachelor's degree for primary school education at the University of Zaragoza's Huesca campus. The University of Zaragoza's Huesca campus also offers a Diploma de Especialización (These are studies that require a previous university degree and have a duration of between 30 and 59 ECTS credits.) in Aragonese Philology with 37 ECTS credits. Academia de l'Aragonés Arredol – Electronic Aragonese newspaper Rosario Ustáriz Borra References Further reading External links Catalogue of Aragonese publications Academia de l'Aragonés Consello d'a Fabla Aragonesa Ligallo de Fablans de l'Aragonés A.C. Nogará Sociedat de Lingüistica Aragonesa Aragonese language Aragonese culture Pyrenean-Mozarabic languages Subject–verb–object languages
2815
https://en.wikipedia.org/wiki/Advanced%20Mobile%20Phone%20System
Advanced Mobile Phone System
Advanced Mobile Phone System (AMPS) was an analog mobile phone system standard originally developed by Bell Labs and later modified in a cooperative effort between Bell Labs and Motorola. It was officially introduced in the Americas on October 13, 1983, and was deployed in many other countries too, including Israel in 1986, Australia in 1987, Singapore in 1988, and Pakistan in 1990. It was the primary analog mobile phone system in North America (and other locales) through the 1980s and into the 2000s. As of February 18, 2008, carriers in the United States were no longer required to support AMPS and companies such as AT&T and Verizon Communications have discontinued this service permanently. AMPS was discontinued in Australia in September 2000, in Pakistan by October 2004, in Israel by January 2010, and Brazil by 2010. History The first cellular network efforts began at Bell Labs and with research conducted at Motorola. In 1960, John F. Mitchell became Motorola's chief engineer for its mobile-communication products, and oversaw the development and marketing of the first pager to use transistors. Motorola had long produced mobile telephones for automobiles, but these large and heavy models consumed too much power to allow their use without the automobile's engine running. Mitchell's team, which included the gifted Dr. Martin Cooper, developed portable cellular telephony. Cooper and Mitchell were among the Motorola employees granted a patent for this work in 1973. The first call on the prototype connected, reportedly, to a wrong number. While Motorola was developing a cellular phone, from 1968 to 1983 Bell Labs worked out a system called Advanced Mobile Phone System (AMPS), which became the first cellular network standard in the United States. The first system was successfully deployed in Chicago, Illinois, in 1979. Motorola and others designed and built the cellular phones for this and other cellular systems. Martin Cooper, a former general manager for the systems division at Motorola, led a team that produced the first cellular handset in 1973 and made the first phone call from it. In 1983 Motorola introduced the DynaTAC 8000x, the first commercially available cellular phone small enough to be easily carried. He later introduced the so-called Bag Phone. In 1992, the first smartphone, called IBM Simon, used AMPS. Frank Canova led its design at IBM and it was demonstrated that year at the COMDEX computer-industry trade-show. A refined version of the product was marketed to consumers in 1994 by BellSouth under the name Simon Personal Communicator. The Simon was the first device that can be properly referred to as a "smartphone", even though that term was not yet coined. Technology AMPS is a first-generation cellular technology that uses separate frequencies, or "channels", for each conversation. It therefore required considerable bandwidth for a large number of users. In general terms, AMPS was very similar to the older "0G" Improved Mobile Telephone Service it replaced, but used considerably more computing power to select frequencies, hand off conversations to land lines, and handle billing and call setup. What really separated AMPS from older systems is the "back end" call setup functionality. In AMPS, the cell centers could flexibly assign channels to handsets based on signal strength, allowing the same frequency to be re-used, without interference, if locations were separated enough. The channels were grouped so a specific set was different of the one used on the cell nearby. This allowed a larger number of phones to be supported over a geographical area. AMPS pioneers coined the term "cellular" because of its use of small hexagonal "cells" within a system. AMPS suffered from many weaknesses compared to today's digital technologies. As an analog standard, it was susceptible to static and noise, and there was no protection from 'eavesdropping' using a scanner or an older TV set that could tune into channels 70-83. Cloning In the 1990s, an epidemic of "cloning" cost the cellular carriers millions of dollars. An eavesdropper with specialized equipment could intercept a handset's ESN (Electronic Serial Number) and MDN or CTN (Mobile Directory Number or Cellular Telephone Number). The Electronic Serial Number, a 12-digit number sent by the handset to the cellular system for billing purposes, uniquely identified that phone on the network. The system then allowed or disallowed calls and/or features based on its customer file. A person intercepting an ESN/MDN pair could clone the combination onto a different phone and use it in other areas for making calls without paying. Cellular phone cloning became possible with off-the-shelf technology in the 1990s. Would-be cloners required three key items : A radio receiver, such as the Icom PCR-1000, that could tune into the Reverse Channel (the frequency on which AMPS phones transmit data to the tower) A PC with a sound card and a software program called Banpaia A phone that could easily be used for cloning, such as the Oki 900 The radio, when tuned to the proper frequency, would receive the signal transmitted by the cell phone to be cloned, containing the phone's ESN/MDN pair. This signal would feed into the sound-card audio-input of the PC, and Banpaia would decode the ESN/MDN pair from this signal and display it on the screen. The hacker could then copy that data into the Oki 900 phone and reboot it, after which the phone network could not distinguish the Oki from the original phone whose signal had been received. This gave the cloner, through the Oki phone, the ability to use the mobile-phone service of the legitimate subscriber whose phone was cloned – just as if that phone had been physically stolen, except that the subscriber retained his or her phone, unaware that the phone had been cloned—at least until that subscriber received his or her next bill. The problem became so large that some carriers required the use of a PIN before making calls. Eventually, the cellular companies initiated a system called RF Fingerprinting, whereby it could determine subtle differences in the signal of one phone from another and shut down some cloned phones. Some legitimate customers had problems with this though if they made certain changes to their own phone, such as replacing the battery and/or antenna. The Oki 900 could listen in to AMPS phone-calls right out-of-the-box with no hardware modifications. Standards AMPS was originally standardized by American National Standards Institute (ANSI) as EIA/TIA/IS-3. EIA/TIA/IS-3 was superseded by EIA/TIA-553 and TIA interim standard with digital technologies, the cost of wireless service is so low that the problem of cloning has virtually disappeared. Frequency bands AMPS cellular service operated in the 850 MHz Cellular band. For each market area, the United States Federal Communications Commission (FCC) allowed two licensees (networks) known as "A" and "B" carriers. Each carrier within a market used a specified "block" of frequencies consisting of 21 control channels and 395 voice channels. Originally, the B (wireline) side license was usually owned by the local phone company, and the A (non-wireline) license was given to wireless telephone providers. At the inception of cellular in 1983, the FCC had granted each carrier within a market 333 channel pairs (666 channels total). By the late 1980s, the cellular industry's subscriber base had grown into the millions across America and it became necessary to add channels for additional capacity. In 1989, the FCC granted carriers an expansion from the previous 666 channels to the final 832 (416 pairs per carrier). The additional frequencies were from the band held in reserve for future (inevitable) expansion. These frequencies were immediately adjacent to the existing cellular band. These bands had previously been allocated to UHF TV channels 70–83. Each duplex channel was composed of 2 frequencies. 416 of these were in the 824–849 MHz range for transmissions from mobile stations to the base stations, paired with 416 frequencies in the 869–894 MHz range for transmissions from base stations to the mobile stations. Each cell site used a different subset of these channels than its neighbors to avoid interference. This significantly reduced the number of channels available at each site in real-world systems. Each AMPS channel had a one way bandwidth of 30 kHz, for a total of 60 kHz for each duplex channel. Laws were passed in the US which prohibited the FCC type acceptance and sale of any receiver which could tune the frequency ranges occupied by analog AMPS cellular services. Though the service is no longer offered, these laws remain in force (although they may no longer be enforced). Narrowband AMPS In 1991, Motorola proposed an AMPS enhancement known as narrowband AMPS (NAMPS or N-AMPS). Digital AMPS Later, many AMPS networks were partially converted to D-AMPS, often referred to as TDMA (though TDMA is a generic term that applies to many 2G cellular systems). D-AMPS, commercially deployed since 1993, was a digital, 2G standard used mainly by AT&T Mobility and U.S. Cellular in the United States, Rogers Wireless in Canada, Telcel in Mexico, Telecom Italia Mobile (TIM) in Brazil, VimpelCom in Russia, Movilnet in Venezuela, and Cellcom in Israel. In most areas, D-AMPS is no longer offered and has been replaced by more advanced digital wireless networks. Successor technologies AMPS and D-AMPS have now been phased out in favor of either CDMA2000 or GSM, which allow for higher capacity data transfers for services such as WAP, Multimedia Messaging System (MMS), and wireless Internet access. There are some phones capable of supporting AMPS, D-AMPS and GSM all in one phone (using the GAIT standard). Analog AMPS being replaced by digital In 2002, the FCC decided to no longer require A and B carriers to support AMPS service as of February 18, 2008. All AMPS carriers have converted to a digital standard such as CDMA2000 or GSM. Digital technologies such as GSM and CDMA2000 support multiple voice calls on the same channel and offer enhanced features such as two-way text messaging and data services. Unlike in the United States, the Canadian Radio-television and Telecommunications Commission (CRTC) and Industry Canada have not set any requirement for maintaining AMPS service in Canada. Rogers Wireless has dismantled their AMPS (along with IS-136) network; the networks were shut down May 31, 2007. Bell Mobility and Telus Mobility, who operated AMPS networks in Canada, announced that they would observe the same timetable as outlined by the FCC in the United States, and as a result would not begin to dismantle their AMPS networks until after February 2008. OnStar relied heavily on North American AMPS service for its subscribers because, when the system was developed, AMPS offered the most comprehensive wireless coverage in the US. In 2006, ADT asked the FCC to extend the AMPS deadline due to many of their alarm systems still using analog technology to communicate with the control centers. Cellular companies who own an A or B license (such as Verizon and Alltel) were required to provide analog service until February 18, 2008. After that point, however, most cellular companies were eager to shut down AMPS and use the remaining channels for digital services. OnStar transitioned to digital service with the help of data transport technology developed by Airbiquity, but warned customers who could not be upgraded to digital service that their service would permanently expire on January 1, 2008. Commercial deployments of AMPS by country See also History of mobile phones Citations References Interview of Joel Engel History of mobile phones Mobile radio telephone systems Telecommunications-related introductions in 1983
2819
https://en.wikipedia.org/wiki/Aerodynamics
Aerodynamics
Aerodynamics ( aero (air) + (dynamics)) is the study of the motion of air, particularly when affected by a solid object, such as an airplane wing. It involves topics covered in the field of fluid dynamics and its subfield of gas dynamics, and is an important domain of study in aeronautics. The term aerodynamics is often used synonymously with gas dynamics, the difference being that "gas dynamics" applies to the study of the motion of all gases, and is not limited to air. The formal study of aerodynamics began in the modern sense in the eighteenth century, although observations of fundamental concepts such as aerodynamic drag were recorded much earlier. Most of the early efforts in aerodynamics were directed toward achieving heavier-than-air flight, which was first demonstrated by Otto Lilienthal in 1891. Since then, the use of aerodynamics through mathematical analysis, empirical approximations, wind tunnel experimentation, and computer simulations has formed a rational basis for the development of heavier-than-air flight and a number of other technologies. Recent work in aerodynamics has focused on issues related to compressible flow, turbulence, and boundary layers and has become increasingly computational in nature. History Modern aerodynamics only dates back to the seventeenth century, but aerodynamic forces have been harnessed by humans for thousands of years in sailboats and windmills, and images and stories of flight appear throughout recorded history, such as the Ancient Greek legend of Icarus and Daedalus. Fundamental concepts of continuum, drag, and pressure gradients appear in the work of Aristotle and Archimedes. In 1726, Sir Isaac Newton became the first person to develop a theory of air resistance, making him one of the first aerodynamicists. Dutch-Swiss mathematician Daniel Bernoulli followed in 1738 with Hydrodynamica in which he described a fundamental relationship between pressure, density, and flow velocity for incompressible flow known today as Bernoulli's principle, which provides one method for calculating aerodynamic lift. In 1757, Leonhard Euler published the more general Euler equations which could be applied to both compressible and incompressible flows. The Euler equations were extended to incorporate the effects of viscosity in the first half of the 1800s, resulting in the Navier–Stokes equations. The Navier–Stokes equations are the most general governing equations of fluid flow but are difficult to solve for the flow around all but the simplest of shapes. In 1799, Sir George Cayley became the first person to identify the four aerodynamic forces of flight (weight, lift, drag, and thrust), as well as the relationships between them, and in doing so outlined the path toward achieving heavier-than-air flight for the next century. In 1871, Francis Herbert Wenham constructed the first wind tunnel, allowing precise measurements of aerodynamic forces. Drag theories were developed by Jean le Rond d'Alembert, Gustav Kirchhoff, and Lord Rayleigh. In 1889, Charles Renard, a French aeronautical engineer, became the first person to reasonably predict the power needed for sustained flight. Otto Lilienthal, the first person to become highly successful with glider flights, was also the first to propose thin, curved airfoils that would produce high lift and low drag. Building on these developments as well as research carried out in their own wind tunnel, the Wright brothers flew the first powered airplane on December 17, 1903. During the time of the first flights, Frederick W. Lanchester, Martin Kutta, and Nikolai Zhukovsky independently created theories that connected circulation of a fluid flow to lift. Kutta and Zhukovsky went on to develop a two-dimensional wing theory. Expanding upon the work of Lanchester, Ludwig Prandtl is credited with developing the mathematics behind thin-airfoil and lifting-line theories as well as work with boundary layers. As aircraft speed increased designers began to encounter challenges associated with air compressibility at speeds near the speed of sound. The differences in airflow under such conditions lead to problems in aircraft control, increased drag due to shock waves, and the threat of structural failure due to aeroelastic flutter. The ratio of the flow speed to the speed of sound was named the Mach number after Ernst Mach who was one of the first to investigate the properties of the supersonic flow. Macquorn Rankine and Pierre Henri Hugoniot independently developed the theory for flow properties before and after a shock wave, while Jakob Ackeret led the initial work of calculating the lift and drag of supersonic airfoils. Theodore von Kármán and Hugh Latimer Dryden introduced the term transonic to describe flow speeds between the critical Mach number and Mach 1 where drag increases rapidly. This rapid increase in drag led aerodynamicists and aviators to disagree on whether supersonic flight was achievable until the sound barrier was broken in 1947 using the Bell X-1 aircraft. By the time the sound barrier was broken, aerodynamicists' understanding of the subsonic and low supersonic flow had matured. The Cold War prompted the design of an ever-evolving line of high-performance aircraft. Computational fluid dynamics began as an effort to solve for flow properties around complex objects and has rapidly grown to the point where entire aircraft can be designed using computer software, with wind-tunnel tests followed by flight tests to confirm the computer predictions. Understanding of supersonic and hypersonic aerodynamics has matured since the 1960s, and the goals of aerodynamicists have shifted from the behaviour of fluid flow to the engineering of a vehicle such that it interacts predictably with the fluid flow. Designing aircraft for supersonic and hypersonic conditions, as well as the desire to improve the aerodynamic efficiency of current aircraft and propulsion systems, continues to motivate new research in aerodynamics, while work continues to be done on important problems in basic aerodynamic theory related to flow turbulence and the existence and uniqueness of analytical solutions to the Navier–Stokes equations. Fundamental concepts Understanding the motion of air around an object (often called a flow field) enables the calculation of forces and moments acting on the object. In many aerodynamics problems, the forces of interest are the fundamental forces of flight: lift, drag, thrust, and weight. Of these, lift and drag are aerodynamic forces, i.e. forces due to air flow over a solid body. Calculation of these quantities is often founded upon the assumption that the flow field behaves as a continuum. Continuum flow fields are characterized by properties such as flow velocity, pressure, density, and temperature, which may be functions of position and time. These properties may be directly or indirectly measured in aerodynamics experiments or calculated starting with the equations for conservation of mass, momentum, and energy in air flows. Density, flow velocity, and an additional property, viscosity, are used to classify flow fields. Flow classification Flow velocity is used to classify flows according to speed regime. Subsonic flows are flow fields in which the air speed field is always below the local speed of sound. Transonic flows include both regions of subsonic flow and regions in which the local flow speed is greater than the local speed of sound. Supersonic flows are defined to be flows in which the flow speed is greater than the speed of sound everywhere. A fourth classification, hypersonic flow, refers to flows where the flow speed is much greater than the speed of sound. Aerodynamicists disagree on the precise definition of hypersonic flow. Compressible flow accounts for varying density within the flow. Subsonic flows are often idealized as incompressible, i.e. the density is assumed to be constant. Transonic and supersonic flows are compressible, and calculations that neglect the changes of density in these flow fields will yield inaccurate results. Viscosity is associated with the frictional forces in a flow. In some flow fields, viscous effects are very small, and approximate solutions may safely neglect viscous effects. These approximations are called inviscid flows. Flows for which viscosity is not neglected are called viscous flows. Finally, aerodynamic problems may also be classified by the flow environment. External aerodynamics is the study of flow around solid objects of various shapes (e.g. around an airplane wing), while internal aerodynamics is the study of flow through passages inside solid objects (e.g. through a jet engine). Continuum assumption Unlike liquids and solids, gases are composed of discrete molecules which occupy only a small fraction of the volume filled by the gas. On a molecular level, flow fields are made up of the collisions of many individual of gas molecules between themselves and with solid surfaces. However, in most aerodynamics applications, the discrete molecular nature of gases is ignored, and the flow field is assumed to behave as a continuum. This assumption allows fluid properties such as density and flow velocity to be defined everywhere within the flow. The validity of the continuum assumption is dependent on the density of the gas and the application in question. For the continuum assumption to be valid, the mean free path length must be much smaller than the length scale of the application in question. For example, many aerodynamics applications deal with aircraft flying in atmospheric conditions, where the mean free path length is on the order of micrometers and where the body is orders of magnitude larger. In these cases, the length scale of the aircraft ranges from a few meters to a few tens of meters, which is much larger than the mean free path length. For such applications, the continuum assumption is reasonable. The continuum assumption is less valid for extremely low-density flows, such as those encountered by vehicles at very high altitudes (e.g. 300,000 ft/90 km) or satellites in Low Earth orbit. In those cases, statistical mechanics is a more accurate method of solving the problem than is continuum aerodynamics. The Knudsen number can be used to guide the choice between statistical mechanics and the continuous formulation of aerodynamics. Conservation laws The assumption of a fluid continuum allows problems in aerodynamics to be solved using fluid dynamics conservation laws. Three conservation principles are used: Conservation of mass Conservation of mass requires that mass is neither created nor destroyed within a flow; the mathematical formulation of this principle is known as the mass continuity equation. Conservation of momentum The mathematical formulation of this principle can be considered an application of Newton's Second Law. Momentum within a flow is only changed by external forces, which may include both surface forces, such as viscous (frictional) forces, and body forces, such as weight. The momentum conservation principle may be expressed as either a vector equation or separated into a set of three scalar equations (x,y,z components). Conservation of energy The energy conservation equation states that energy is neither created nor destroyed within a flow, and that any addition or subtraction of energy to a volume in the flow is caused by heat transfer, or by work into and out of the region of interest. Together, these equations are known as the Navier–Stokes equations, although some authors define the term to only include the momentum equation(s). The Navier–Stokes equations have no known analytical solution and are solved in modern aerodynamics using computational techniques. Because computational methods using high speed computers were not historically available and the high computational cost of solving these complex equations now that they are available, simplifications of the Navier–Stokes equations have been and continue to be employed. The Euler equations are a set of similar conservation equations which neglect viscosity and may be used in cases where the effect of viscosity is expected to be small. Further simplifications lead to Laplace's equation and potential flow theory. Additionally, Bernoulli's equation is a solution in one dimension to both the momentum and energy conservation equations. The ideal gas law or another such equation of state is often used in conjunction with these equations to form a determined system that allows the solution for the unknown variables. Branches of aerodynamics Aerodynamic problems are classified by the flow environment or properties of the flow, including flow speed, compressibility, and viscosity. External aerodynamics is the study of flow around solid objects of various shapes. Evaluating the lift and drag on an airplane or the shock waves that form in front of the nose of a rocket are examples of external aerodynamics. Internal aerodynamics is the study of flow through passages in solid objects. For instance, internal aerodynamics encompasses the study of the airflow through a jet engine or through an air conditioning pipe. Aerodynamic problems can also be classified according to whether the flow speed is below, near or above the speed of sound. A problem is called subsonic if all the speeds in the problem are less than the speed of sound, transonic if speeds both below and above the speed of sound are present (normally when the characteristic speed is approximately the speed of sound), supersonic when the characteristic flow speed is greater than the speed of sound, and hypersonic when the flow speed is much greater than the speed of sound. Aerodynamicists disagree over the precise definition of hypersonic flow; a rough definition considers flows with Mach numbers above 5 to be hypersonic. The influence of viscosity on the flow dictates a third classification. Some problems may encounter only very small viscous effects, in which case viscosity can be considered to be negligible. The approximations to these problems are called inviscid flows. Flows for which viscosity cannot be neglected are called viscous flows. Incompressible aerodynamics An incompressible flow is a flow in which density is constant in both time and space. Although all real fluids are compressible, a flow is often approximated as incompressible if the effect of the density changes cause only small changes to the calculated results. This is more likely to be true when the flow speeds are significantly lower than the speed of sound. Effects of compressibility are more significant at speeds close to or above the speed of sound. The Mach number is used to evaluate whether the incompressibility can be assumed, otherwise the effects of compressibility must be included. Subsonic flow Subsonic (or low-speed) aerodynamics describes fluid motion in flows which are much lower than the speed of sound everywhere in the flow. There are several branches of subsonic flow but one special case arises when the flow is inviscid, incompressible and irrotational. This case is called potential flow and allows the differential equations that describe the flow to be a simplified version of the equations of fluid dynamics, thus making available to the aerodynamicist a range of quick and easy solutions. In solving a subsonic problem, one decision to be made by the aerodynamicist is whether to incorporate the effects of compressibility. Compressibility is a description of the amount of change of density in the flow. When the effects of compressibility on the solution are small, the assumption that density is constant may be made. The problem is then an incompressible low-speed aerodynamics problem. When the density is allowed to vary, the flow is called compressible. In air, compressibility effects are usually ignored when the Mach number in the flow does not exceed 0.3 (about 335 feet (102 m) per second or 228 miles (366 km) per hour at 60 °F (16 °C)). Above Mach 0.3, the problem flow should be described using compressible aerodynamics. Compressible aerodynamics According to the theory of aerodynamics, a flow is considered to be compressible if the density changes along a streamline. This means that – unlike incompressible flow – changes in density are considered. In general, this is the case where the Mach number in part or all of the flow exceeds 0.3. The Mach 0.3 value is rather arbitrary, but it is used because gas flows with a Mach number below that value demonstrate changes in density of less than 5%. Furthermore, that maximum 5% density change occurs at the stagnation point (the point on the object where flow speed is zero), while the density changes around the rest of the object will be significantly lower. Transonic, supersonic, and hypersonic flows are all compressible flows. Transonic flow The term Transonic refers to a range of flow velocities just below and above the local speed of sound (generally taken as Mach 0.8–1.2). It is defined as the range of speeds between the critical Mach number, when some parts of the airflow over an aircraft become supersonic, and a higher speed, typically near Mach 1.2, when all of the airflow is supersonic. Between these speeds, some of the airflow is supersonic, while some of the airflow is not supersonic. Supersonic flow Supersonic aerodynamic problems are those involving flow speeds greater than the speed of sound. Calculating the lift on the Concorde during cruise can be an example of a supersonic aerodynamic problem. Supersonic flow behaves very differently from subsonic flow. Fluids react to differences in pressure; pressure changes are how a fluid is "told" to respond to its environment. Therefore, since sound is, in fact, an infinitesimal pressure difference propagating through a fluid, the speed of sound in that fluid can be considered the fastest speed that "information" can travel in the flow. This difference most obviously manifests itself in the case of a fluid striking an object. In front of that object, the fluid builds up a stagnation pressure as impact with the object brings the moving fluid to rest. In fluid traveling at subsonic speed, this pressure disturbance can propagate upstream, changing the flow pattern ahead of the object and giving the impression that the fluid "knows" the object is there by seemingly adjusting its movement and is flowing around it. In a supersonic flow, however, the pressure disturbance cannot propagate upstream. Thus, when the fluid finally reaches the object it strikes it and the fluid is forced to change its properties – temperature, density, pressure, and Mach number—in an extremely violent and irreversible fashion called a shock wave. The presence of shock waves, along with the compressibility effects of high-flow velocity (see Reynolds number) fluids, is the central difference between the supersonic and subsonic aerodynamics regimes. Hypersonic flow In aerodynamics, hypersonic speeds are speeds that are highly supersonic. In the 1970s, the term generally came to refer to speeds of Mach 5 (5 times the speed of sound) and above. The hypersonic regime is a subset of the supersonic regime. Hypersonic flow is characterized by high temperature flow behind a shock wave, viscous interaction, and chemical dissociation of gas. Associated terminology The incompressible and compressible flow regimes produce many associated phenomena, such as boundary layers and turbulence. Boundary layers The concept of a boundary layer is important in many problems in aerodynamics. The viscosity and fluid friction in the air is approximated as being significant only in this thin layer. This assumption makes the description of such aerodynamics much more tractable mathematically. Turbulence In aerodynamics, turbulence is characterized by chaotic property changes in the flow. These include low momentum diffusion, high momentum convection, and rapid variation of pressure and flow velocity in space and time. Flow that is not turbulent is called laminar flow. Aerodynamics in other fields Engineering design Aerodynamics is a significant element of vehicle design, including road cars and trucks where the main goal is to reduce the vehicle drag coefficient, and racing cars, where in addition to reducing drag the goal is also to increase the overall level of downforce. Aerodynamics is also important in the prediction of forces and moments acting on sailing vessels. It is used in the design of mechanical components such as hard drive heads. Structural engineers resort to aerodynamics, and particularly aeroelasticity, when calculating wind loads in the design of large buildings, bridges, and wind turbines. The aerodynamics of internal passages is important in heating/ventilation, gas piping, and in automotive engines where detailed flow patterns strongly affect the performance of the engine. Environmental design Urban aerodynamics are studied by town planners and designers seeking to improve amenity in outdoor spaces, or in creating urban microclimates to reduce the effects of urban pollution. The field of environmental aerodynamics describes ways in which atmospheric circulation and flight mechanics affect ecosystems. Aerodynamic equations are used in numerical weather prediction. Ball-control in sports Sports in which aerodynamics are of crucial importance include soccer, table tennis, cricket, baseball, and golf, in which most players can control the trajectory of the ball using the "Magnus effect". See also Aeronautics Aerostatics Aviation Insect flight – how bugs fly List of aerospace engineering topics List of engineering topics Nose cone design Fluid dynamics Computational fluid dynamics References Further reading General aerodynamics Subsonic aerodynamics Obert, Ed (2009). . Delft; About practical aerodynamics in industry and the effects on design of aircraft. . Transonic aerodynamics Supersonic aerodynamics Hypersonic aerodynamics History of aerodynamics Aerodynamics related to engineering Ground vehicles Fixed-wing aircraft Helicopters Missiles Model aircraft Related branches of aerodynamics Aerothermodynamics Aeroelasticity Boundary layers Turbulence External links NASA Beginner's Guide to Aerodynamics Aerodynamics for Students Aerodynamics for Pilots Aerodynamics and Race Car Tuning Aerodynamic Related Projects eFluids Bicycle Aerodynamics Application of Aerodynamics in Formula One (F1) Aerodynamics in Car Racing Aerodynamics of Birds NASA Aerodynamics Index Dynamics Energy in transport
2822
https://en.wikipedia.org/wiki/Ash
Ash
Ash or ashes are the solid remnants of fires. Specifically, ash refers to all non-aqueous, non-gaseous residues that remain after something burns. In analytical chemistry, to analyse the mineral and metal content of chemical samples, ash is the non-gaseous, non-liquid residue after complete combustion. Ashes as the end product of incomplete combustion are mostly mineral, but usually still contain an amount of combustible organic or other oxidizable residues. The best-known type of ash is wood ash, as a product of wood combustion in campfires, fireplaces, etc. The darker the wood ashes, the higher the content of remaining charcoal from incomplete combustion. The ashes are of different types. Some ashes contain natural compounds that make soil fertile. Others have chemical compounds that can be toxic but may break up in soil from chemical changes and microorganism activity. Like soap, ash is also a disinfecting agent (alkaline). The World Health Organization recommends ash or sand as alternative for handwashing when soap is not available. Natural occurrence Ash occurs naturally from any fire that burns vegetation, and may disperse in the soil to fertilise it, or clump under it for long enough to carbonise into coal. Specific types Wood ash Products of coal combustion Bottom ash Fly ash Cigarette or cigar ash Incinerator bottom ash, a form of ash produced in incinerators Volcanic ash, ash that consists of fragmented glass, rock, and minerals that appears during an eruption. Cremation ashes Cremation ashes, also called cremated remains or "cremains," are the bodily remains left from cremation. They often take the form of a grey powder resembling coarse sand. While often referred to as ashes, the remains primarily consist of powdered bone fragments due to the cremation process, which eliminates the body's organic materials. People often store these ashes in containers like urns, although they are also sometimes buried or scattered in specific locations. See also Ash (analytical chemistry) Cinereous, consisting of ashes, ash-colored or ash-like Potash, a term for many useful potassium salts that traditionally derived from plant ashes, but today are typically mined from underground deposits coal, consisting of carbon as ash, and ash can be converted into coal carbon, basic component of ashes charcoal, carbon residue after heating wood mainly used as traditional fuel References Combustion
2823
https://en.wikipedia.org/wiki/Antiderivative
Antiderivative
In calculus, an antiderivative, inverse derivative, primitive function, primitive integral or indefinite integral of a function is a differentiable function whose derivative is equal to the original function . This can be stated symbolically as . The process of solving for antiderivatives is called antidifferentiation (or indefinite integration), and its opposite operation is called differentiation, which is the process of finding a derivative. Antiderivatives are often denoted by capital Roman letters such as and . Antiderivatives are related to definite integrals through the second fundamental theorem of calculus: the definite integral of a function over a closed interval where the function is Riemann integrable is equal to the difference between the values of an antiderivative evaluated at the endpoints of the interval. In physics, antiderivatives arise in the context of rectilinear motion (e.g., in explaining the relationship between position, velocity and acceleration). The discrete equivalent of the notion of antiderivative is antidifference. Examples The function is an antiderivative of , since the derivative of is . And since the derivative of a constant is zero, will have an infinite number of antiderivatives, such as , etc. Thus, all the antiderivatives of can be obtained by changing the value of in , where is an arbitrary constant known as the constant of integration. Essentially, the graphs of antiderivatives of a given function are vertical translations of each other, with each graph's vertical location depending upon the value . More generally, the power function has antiderivative if , and if . In physics, the integration of acceleration yields velocity plus a constant. The constant is the initial velocity term that would be lost upon taking the derivative of velocity, because the derivative of a constant term is zero. This same pattern applies to further integrations and derivatives of motion (position, velocity, acceleration, and so on). Thus, integration produces the relations of acceleration, velocity and displacement: Uses and properties Antiderivatives can be used to compute definite integrals, using the fundamental theorem of calculus: if is an antiderivative of the integrable function over the interval , then: Because of this, each of the infinitely many antiderivatives of a given function may be called the "indefinite integral" of f and written using the integral symbol with no bounds: If is an antiderivative of , and the function is defined on some interval, then every other antiderivative of differs from by a constant: there exists a number such that for all . is called the constant of integration. If the domain of is a disjoint union of two or more (open) intervals, then a different constant of integration may be chosen for each of the intervals. For instance is the most general antiderivative of on its natural domain Every continuous function has an antiderivative, and one antiderivative is given by the definite integral of with variable upper boundary: for any in the domain of . Varying the lower boundary produces other antiderivatives, but not necessarily all possible antiderivatives. This is another formulation of the fundamental theorem of calculus. There are many functions whose antiderivatives, even though they exist, cannot be expressed in terms of elementary functions (like polynomials, exponential functions, logarithms, trigonometric functions, inverse trigonometric functions and their combinations). Examples of these are the error function the Fresnel function the sine integral the logarithmic integral function and sophomore's dream For a more detailed discussion, see also Differential Galois theory. Techniques of integration Finding antiderivatives of elementary functions is often considerably harder than finding their derivatives (indeed, there is no pre-defined method for computing indefinite integrals). For some elementary functions, it is impossible to find an antiderivative in terms of other elementary functions. To learn more, see elementary functions and nonelementary integral. There exist many properties and techniques for finding antiderivatives. These include, among others: The linearity of integration (which breaks complicated integrals into simpler ones) Integration by substitution, often combined with trigonometric identities or the natural logarithm The inverse chain rule method (a special case of integration by substitution) Integration by parts (to integrate products of functions) Inverse function integration (a formula that expresses the antiderivative of the inverse of an invertible and continuous function , in terms of the antiderivative of and of ). The method of partial fractions in integration (which allows us to integrate all rational functions—fractions of two polynomials) The Risch algorithm Additional techniques for multiple integrations (see for instance double integrals, polar coordinates, the Jacobian and the Stokes' theorem) Numerical integration (a technique for approximating a definite integral when no elementary antiderivative exists, as in the case of ) Algebraic manipulation of integrand (so that other integration techniques, such as integration by substitution, may be used) Cauchy formula for repeated integration (to calculate the -times antiderivative of a function) Computer algebra systems can be used to automate some or all of the work involved in the symbolic techniques above, which is particularly useful when the algebraic manipulations involved are very complex or lengthy. Integrals which have already been derived can be looked up in a table of integrals. Of non-continuous functions Non-continuous functions can have antiderivatives. While there are still open questions in this area, it is known that: Some highly pathological functions with large sets of discontinuities may nevertheless have antiderivatives. In some cases, the antiderivatives of such pathological functions may be found by Riemann integration, while in other cases these functions are not Riemann integrable. Assuming that the domains of the functions are open intervals: A necessary, but not sufficient, condition for a function to have an antiderivative is that have the intermediate value property. That is, if is a subinterval of the domain of and is any real number between and , then there exists a between and such that . This is a consequence of Darboux's theorem. The set of discontinuities of must be a meagre set. This set must also be an F-sigma set (since the set of discontinuities of any function must be of this type). Moreover, for any meagre F-sigma set, one can construct some function having an antiderivative, which has the given set as its set of discontinuities. If has an antiderivative, is bounded on closed finite subintervals of the domain and has a set of discontinuities of Lebesgue measure 0, then an antiderivative may be found by integration in the sense of Lebesgue. In fact, using more powerful integrals like the Henstock–Kurzweil integral, every function for which an antiderivative exists is integrable, and its general integral coincides with its antiderivative. If has an antiderivative on a closed interval , then for any choice of partition if one chooses sample points as specified by the mean value theorem, then the corresponding Riemann sum telescopes to the value . However if is unbounded, or if is bounded but the set of discontinuities of has positive Lebesgue measure, a different choice of sample points may give a significantly different value for the Riemann sum, no matter how fine the partition. See Example 4 below. Some examples Basic formulae If , then . See also Antiderivative (complex analysis) Formal antiderivative Jackson integral Lists of integrals Symbolic integration Area Notes References Further reading Introduction to Classical Real Analysis, by Karl R. Stromberg; Wadsworth, 1981 (see also) Historical Essay On Continuity Of Derivatives by Dave L. Renfro External links Wolfram Integrator — Free online symbolic integration with Mathematica Function Calculator from WIMS Integral at HyperPhysics Antiderivatives and indefinite integrals at the Khan Academy Integral calculator at Symbolab The Antiderivative at MIT Introduction to Integrals at SparkNotes Antiderivatives at Harvy Mudd College Integral calculus Linear operators in calculus
2839
https://en.wikipedia.org/wiki/Angular%20momentum
Angular momentum
In physics, angular momentum (sometimes called moment of momentum or rotational momentum) is the rotational analog of linear momentum. It is an important physical quantity because it is a conserved quantity – the total angular momentum of a closed system remains constant. Angular momentum has both a direction and a magnitude, and both are conserved. Bicycles and motorcycles, flying discs, rifled bullets, and gyroscopes owe their useful properties to conservation of angular momentum. Conservation of angular momentum is also why hurricanes form spirals and neutron stars have high rotational rates. In general, conservation limits the possible motion of a system, but it does not uniquely determine it. The three-dimensional angular momentum for a point particle is classically represented as a pseudovector , the cross product of the particle's position vector (relative to some origin) and its momentum vector; the latter is in Newtonian mechanics. Unlike linear momentum, angular momentum depends on where this origin is chosen, since the particle's position is measured from it. Angular momentum is an extensive quantity; that is, the total angular momentum of any composite system is the sum of the angular momenta of its constituent parts. For a continuous rigid body or a fluid, the total angular momentum is the volume integral of angular momentum density (angular momentum per unit volume in the limit as volume shrinks to zero) over the entire body. Similar to conservation of linear momentum, where it is conserved if there is no external force, angular momentum is conserved if there is no external torque. Torque can be defined as the rate of change of angular momentum, analogous to force. The net external torque on any system is always equal to the total torque on the system; in other words, the sum of all internal torques of any system is always 0 (this is the rotational analogue of Newton's third law of motion). Therefore, for a closed system (where there is no net external torque), the total torque on the system must be 0, which means that the total angular momentum of the system is constant. The change in angular momentum for a particular interaction is called angular impulse, sometimes twirl. Angular impulse is the angular analog of (linear) impulse. Examples The trivial case of the angular momentum of a body in an orbit is given by where is the mass of the orbiting object, is the orbit's frequency and is the orbit's radius. The angular momentum of a uniform rigid sphere rotating around its axis instead is given by where is the sphere's mass, is the frequency of rotation and is the sphere's radius. Thus, for example, the orbital angular momentum of the Earth with respect to the Sun is about 2.66 × 1040 kg⋅m2⋅s−1, while its rotational angular momentum is about 7.05 × 1033 kg⋅m2⋅s−1. In the case of a uniform rigid sphere rotating around its axis, if, instead of its mass, its density is known, the angular momentum is given by where is the sphere's density, is the frequency of rotation and is the sphere's radius. In the simplest case of a spinning disk, the angular momentum is given by where is the disk's mass, is the frequency of rotation and is the disk's radius. If instead the disk rotates about its diameter (e.g. coin toss), its angular momentum is given by Definition in classical mechanics Just as for angular velocity, there are two special types of angular momentum of an object: the spin angular momentum is the angular momentum about the object's centre of mass, while the orbital angular momentum is the angular momentum about a chosen center of rotation. The Earth has an orbital angular momentum by nature of revolving around the Sun, and a spin angular momentum by nature of its daily rotation around the polar axis. The total angular momentum is the sum of the spin and orbital angular momenta. In the case of the Earth the primary conserved quantity is the total angular momentum of the solar system because angular momentum is exchanged to a small but important extent among the planets and the Sun. The orbital angular momentum vector of a point particle is always parallel and directly proportional to its orbital angular velocity vector ω, where the constant of proportionality depends on both the mass of the particle and its distance from origin. The spin angular momentum vector of a rigid body is proportional but not always parallel to the spin angular velocity vector Ω, making the constant of proportionality a second-rank tensor rather than a scalar. Orbital angular momentum in two dimensions Angular momentum is a vector quantity (more precisely, a pseudovector) that represents the product of a body's rotational inertia and rotational velocity (in radians/sec) about a particular axis. However, if the particle's trajectory lies in a single plane, it is sufficient to discard the vector nature of angular momentum, and treat it as a scalar (more precisely, a pseudoscalar). Angular momentum can be considered a rotational analog of linear momentum. Thus, where linear momentum is proportional to mass and linear speed angular momentum is proportional to moment of inertia and angular speed measured in radians per second. Unlike mass, which depends only on amount of matter, moment of inertia depends also on the position of the axis of rotation and the distribution of the matter. Unlike linear velocity, which does not depend upon the choice of origin, orbital angular velocity is always measured with respect to a fixed origin. Therefore, strictly speaking, should be referred to as the angular momentum relative to that center. In the case of circular motion of a single particle, we can use and to expand angular momentum as reducing to: the product of the radius of rotation and the linear momentum of the particle , where is the linear (tangential) speed. This simple analysis can also apply to non-circular motion if one uses the component of the motion perpendicular to the radius vector: where is the perpendicular component of the motion. Expanding, rearranging, and reducing, angular momentum can also be expressed, where is the length of the moment arm, a line dropped perpendicularly from the origin onto the path of the particle. It is this definition, , to which the term moment of momentum refers. Scalar angular momentum from Lagrangian mechanics Another approach is to define angular momentum as the conjugate momentum (also called canonical momentum) of the angular coordinate expressed in the Lagrangian of the mechanical system. Consider a mechanical system with a mass constrained to move in a circle of radius in the absence of any external force field. The kinetic energy of the system is And the potential energy is Then the Lagrangian is The generalized momentum "canonically conjugate to" the coordinate is defined by Orbital angular momentum in three dimensions To completely define orbital angular momentum in three dimensions, it is required to know the rate at which the position vector sweeps out angle, the direction perpendicular to the instantaneous plane of angular displacement, and the mass involved, as well as how this mass is distributed in space. By retaining this vector nature of angular momentum, the general nature of the equations is also retained, and can describe any sort of three-dimensional motion about the center of rotation – circular, linear, or otherwise. In vector notation, the orbital angular momentum of a point particle in motion about the origin can be expressed as: where is the moment of inertia for a point mass, is the orbital angular velocity of the particle about the origin, is the position vector of the particle relative to the origin, and , is the linear velocity of the particle relative to the origin, and is the mass of the particle. This can be expanded, reduced, and by the rules of vector algebra, rearranged: which is the cross product of the position vector and the linear momentum of the particle. By the definition of the cross product, the vector is perpendicular to both and . It is directed perpendicular to the plane of angular displacement, as indicated by the right-hand rule – so that the angular velocity is seen as counter-clockwise from the head of the vector. Conversely, the vector defines the plane in which and lie. By defining a unit vector perpendicular to the plane of angular displacement, a scalar angular speed results, where and where is the perpendicular component of the motion, as above. The two-dimensional scalar equations of the previous section can thus be given direction: and for circular motion, where all of the motion is perpendicular to the radius . In the spherical coordinate system the angular momentum vector expresses as Angular momentum in any number of dimensions Angular momentum can be defined in terms of the cross product only in three dimensions. Defining it as the bivector , where is the exterior product, is valid in any number of dimensions. This exterior product is equivalent to an antisymmetric tensor of degree 2, which also applies in any number of dimensions. Namely, if is a position vector and is the linear momentum vector, then we can define In the general case of summed angular momenta from multiple particles, this antisymmetric tensor has independent components (degrees of freedom), where is the number of dimensions. In the usual three-dimensional case it has 3 independent components, which allows us to identify it with a 3 dimensional pseudovector . The components of this vector relate to the components of the rank 2 tensor as follows: Analogy to linear momentum Angular momentum can be described as the rotational analog of linear momentum. Like linear momentum it involves elements of mass and displacement. Unlike linear momentum it also involves elements of position and shape. Many problems in physics involve matter in motion about some certain point in space, be it in actual rotation about it, or simply moving past it, where it is desired to know what effect the moving matter has on the point—can it exert energy upon it or perform work about it? Energy, the ability to do work, can be stored in matter by setting it in motion—a combination of its inertia and its displacement. Inertia is measured by its mass, and displacement by its velocity. Their product, is the matter's momentum. Referring this momentum to a central point introduces a complication: the momentum is not applied to the point directly. For instance, a particle of matter at the outer edge of a wheel is, in effect, at the end of a lever of the same length as the wheel's radius, its momentum turning the lever about the center point. This imaginary lever is known as the moment arm. It has the effect of multiplying the momentum's effort in proportion to its length, an effect known as a moment. Hence, the particle's momentum referred to a particular point, is the angular momentum, sometimes called, as here, the moment of momentum of the particle versus that particular center point. The equation combines a moment (a mass turning moment arm ) with a linear (straight-line equivalent) speed . Linear speed referred to the central point is simply the product of the distance and the angular speed versus the point: another moment. Hence, angular momentum contains a double moment: Simplifying slightly, the quantity is the particle's moment of inertia, sometimes called the second moment of mass. It is a measure of rotational inertia. The above analogy of the translational momentum and rotational momentum can be expressed in vector form: for linear motion for rotation The direction of momentum is related to the direction of the velocity for linear movement. The direction of angular momentum is related to the angular velocity of the rotation. Because moment of inertia is a crucial part of the spin angular momentum, the latter necessarily includes all of the complications of the former, which is calculated by multiplying elementary bits of the mass by the squares of their distances from the center of rotation. Therefore, the total moment of inertia, and the angular momentum, is a complex function of the configuration of the matter about the center of rotation and the orientation of the rotation for the various bits. For a rigid body, for instance a wheel or an asteroid, the orientation of rotation is simply the position of the rotation axis versus the matter of the body. It may or may not pass through the center of mass, or it may lie completely outside of the body. For the same body, angular momentum may take a different value for every possible axis about which rotation may take place. It reaches a minimum when the axis passes through the center of mass. For a collection of objects revolving about a center, for instance all of the bodies of the Solar System, the orientations may be somewhat organized, as is the Solar System, with most of the bodies' axes lying close to the system's axis. Their orientations may also be completely random. In brief, the more mass and the farther it is from the center of rotation (the longer the moment arm), the greater the moment of inertia, and therefore the greater the angular momentum for a given angular velocity. In many cases the moment of inertia, and hence the angular momentum, can be simplified by, where is the radius of gyration, the distance from the axis at which the entire mass may be considered as concentrated. Similarly, for a point mass the moment of inertia is defined as, where is the radius of the point mass from the center of rotation, and for any collection of particles as the sum, Angular momentum's dependence on position and shape is reflected in its units versus linear momentum: kg⋅m2/s or N⋅m⋅s for angular momentum versus kg⋅m/s or N⋅s for linear momentum. When calculating angular momentum as the product of the moment of inertia times the angular velocity, the angular velocity must be expressed in radians per second, where the radian assumes the dimensionless value of unity. (When performing dimensional analysis, it may be productive to use orientational analysis which treats radians as a base unit, but this is not done in the International system of units). The units if angular momentum can be interpreted as torque⋅time. An object with angular momentum of can be reduced to zero angular velocity by an angular impulse of . The plane perpendicular to the axis of angular momentum and passing through the center of mass is sometimes called the invariable plane, because the direction of the axis remains fixed if only the interactions of the bodies within the system, free from outside influences, are considered. One such plane is the invariable plane of the Solar System. Angular momentum and torque Newton's second law of motion can be expressed mathematically, or force = mass × acceleration. The rotational equivalent for point particles may be derived as follows: which means that the torque (i.e. the time derivative of the angular momentum) is Because the moment of inertia is , it follows that , and which, reduces to This is the rotational analog of Newton's second law. Note that the torque is not necessarily proportional or parallel to the angular acceleration (as one might expect). The reason for this is that the moment of inertia of a particle can change with time, something that cannot occur for ordinary mass. Conservation of angular momentum General considerations A rotational analog of Newton's third law of motion might be written, "In a closed system, no torque can be exerted on any matter without the exertion on some other matter of an equal and opposite torque about the same axis." Hence, angular momentum can be exchanged between objects in a closed system, but total angular momentum before and after an exchange remains constant (is conserved). Seen another way, a rotational analogue of Newton's first law of motion might be written, "A rigid body continues in a state of uniform rotation unless acted by an external influence." Thus with no external influence to act upon it, the original angular momentum of the system remains constant. The conservation of angular momentum is used in analyzing central force motion. If the net force on some body is directed always toward some point, the center, then there is no torque on the body with respect to the center, as all of the force is directed along the radius vector, and none is perpendicular to the radius. Mathematically, torque because in this case and are parallel vectors. Therefore, the angular momentum of the body about the center is constant. This is the case with gravitational attraction in the orbits of planets and satellites, where the gravitational force is always directed toward the primary body and orbiting bodies conserve angular momentum by exchanging distance and velocity as they move about the primary. Central force motion is also used in the analysis of the Bohr model of the atom. For a planet, angular momentum is distributed between the spin of the planet and its revolution in its orbit, and these are often exchanged by various mechanisms. The conservation of angular momentum in the Earth–Moon system results in the transfer of angular momentum from Earth to Moon, due to tidal torque the Moon exerts on the Earth. This in turn results in the slowing down of the rotation rate of Earth, at about 65.7 nanoseconds per day, and in gradual increase of the radius of Moon's orbit, at about 3.82 centimeters per year. The conservation of angular momentum explains the angular acceleration of an ice skater as they bring their arms and legs close to the vertical axis of rotation. By bringing part of the mass of their body closer to the axis, they decrease their body's moment of inertia. Because angular momentum is the product of moment of inertia and angular velocity, if the angular momentum remains constant (is conserved), then the angular velocity (rotational speed) of the skater must increase. The same phenomenon results in extremely fast spin of compact stars (like white dwarfs, neutron stars and black holes) when they are formed out of much larger and slower rotating stars. Conservation is not always a full explanation for the dynamics of a system but is a key constraint. For example, a spinning top is subject to gravitational torque making it lean over and change the angular momentum about the nutation axis, but neglecting friction at the point of spinning contact, it has a conserved angular momentum about its spinning axis, and another about its precession axis. Also, in any planetary system, the planets, star(s), comets, and asteroids can all move in numerous complicated ways, but only so that the angular momentum of the system is conserved. Noether's theorem states that every conservation law is associated with a symmetry (invariant) of the underlying physics. The symmetry associated with conservation of angular momentum is rotational invariance. The fact that the physics of a system is unchanged if it is rotated by any angle about an axis implies that angular momentum is conserved. Relation to Newton's second law of motion While angular momentum total conservation can be understood separately from Newton's laws of motion as stemming from Noether's theorem in systems symmetric under rotations, it can also be understood simply as an efficient method of calculation of results that can also be otherwise arrived at directly from Newton's second law, together with laws governing the forces of nature (such as Newton's third law, Maxwell's equations and Lorentz force). Indeed, given initial conditions of position and velocity for every point, and the forces at such a condition, one may use Newton's second law to calculate the second derivative of position, and solving for this gives full information on the development of the physical system with time. Note, however, that this is no longer true in quantum mechanics, due to the existence of particle spin, which is angular momentum that cannot be described by the cumulative effect of point-like motions in space. As an example, consider decreasing of the moment of inertia, e.g. when a figure skater is pulling in their hands, speeding up the circular motion. In terms of angular momentum conservation, we have, for angular momentum L, moment of inertia I and angular velocity ω: Using this, we see that the change requires an energy of: so that a decrease in the moment of inertia requires investing energy. This can be compared to the work done as calculated using Newton's laws. Each point in the rotating body is accelerating, at each point of time, with radial acceleration of: Let us observe a point of mass m, whose position vector relative to the center of motion is perpendicular to the z-axis at a given point of time, and is at a distance z. The centripetal force on this point, keeping the circular motion, is: Thus the work required for moving this point to a distance dz farther from the center of motion is: For a non-pointlike body one must integrate over this, with m replaced by the mass density per unit z. This gives: which is exactly the energy required for keeping the angular momentum conserved. Note, that the above calculation can also be performed per mass, using kinematics only. Thus the phenomena of figure skater accelerating tangential velocity while pulling their hands in, can be understood as follows in layman's language: The skater's palms are not moving in a straight line, so they are constantly accelerating inwards, but do not gain additional speed because the accelerating is always done when their motion inwards is zero. However, this is different when pulling the palms closer to the body: The acceleration due to rotation now increases the speed; but because of the rotation, the increase in speed does not translate to a significant speed inwards, but to an increase of the rotation speed. In Lagrangian formalism In Lagrangian mechanics, angular momentum for rotation around a given axis, is the conjugate momentum of the generalized coordinate of the angle around the same axis. For example, , the angular momentum around the z axis, is: where is the Lagrangian and is the angle around the z axis. Note that , the time derivative of the angle, is the angular velocity . Ordinarily, the Lagrangian depends on the angular velocity through the kinetic energy: The latter can be written by separating the velocity to its radial and tangential part, with the tangential part at the x-y plane, around the z-axis, being equal to: where the subscript i stands for the i-th body, and m, vT and ωz stand for mass, tangential velocity around the z-axis and angular velocity around that axis, respectively. For a body that is not point-like, with density ρ, we have instead: where integration runs over the area of the body, and Iz is the moment of inertia around the z-axis. Thus, assuming the potential energy does not depend on ωz (this assumption may fail for electromagnetic systems), we have the angular momentum of the ith object: We have thus far rotated each object by a separate angle; we may also define an overall angle θz by which we rotate the whole system, thus rotating also each object around the z-axis, and have the overall angular momentum: From Euler–Lagrange equations it then follows that: Since the lagrangian is dependent upon the angles of the object only through the potential, we have: which is the torque on the ith object. Suppose the system is invariant to rotations, so that the potential is independent of an overall rotation by the angle θz (thus it may depend on the angles of objects only through their differences, in the form ). We therefore get for the total angular momentum: And thus the angular momentum around the z-axis is conserved. This analysis can be repeated separately for each axis, giving conversation of the angular momentum vector. However, the angles around the three axes cannot be treated simultaneously as generalized coordinates, since they are not independent; in particular, two angles per point suffice to determine its position. While it is true that in the case of a rigid body, fully describing it requires, in addition to three translational degrees of freedom, also specification of three rotational degrees of freedom; however these cannot be defined as rotations around the Cartesian axes (see Euler angles). This caveat is reflected in quantum mechanics in the non-trivial commutation relations of the different components of the angular momentum operator. In Hamiltonian formalism Equivalently, in Hamiltonian mechanics the Hamiltonian can be described as a function of the angular momentum. As before, the part of the kinetic energy related to rotation around the z-axis for the ith object is: which is analogous to the energy dependence upon momentum along the z-axis, . Hamilton's equations relate the angle around the z-axis to its conjugate momentum, the angular momentum around the same axis: The first equation gives And so we get the same results as in the Lagrangian formalism. Note, that for combining all axes together, we write the kinetic energy as: where pr is the momentum in the radial direction, and the moment of inertia is a 3-dimensional matrix; bold letters stand for 3-dimensional vectors. For point-like bodies we have: This form of the kinetic energy part of the Hamiltonian is useful in analyzing central potential problems, and is easily transformed to a quantum mechanical work frame (e.g. in the hydrogen atom problem). Angular momentum in orbital mechanics While in classical mechanics the language of angular momentum can be replaced by Newton's laws of motion, it is particularly useful for motion in central potential such as planetary motion in the solar system. Thus, the orbit of a planet in the solar system is defined by its energy, angular momentum and angles of the orbit major axis relative to a coordinate frame. In astrodynamics and celestial mechanics, a quantity closely related to angular momentum is defined as called specific angular momentum. Note that Mass is often unimportant in orbital mechanics calculations, because motion of a body is determined by gravity. The primary body of the system is often so much larger than any bodies in motion about it that the gravitational effect of the smaller bodies on it can be neglected; it maintains, in effect, constant velocity. The motion of all bodies is affected by its gravity in the same way, regardless of mass, and therefore all move approximately the same way under the same conditions. Solid bodies Angular momentum is also an extremely useful concept for describing rotating rigid bodies such as a gyroscope or a rocky planet. For a continuous mass distribution with density function ρ(r), a differential volume element dV with position vector r within the mass has a mass element dm = ρ(r)dV. Therefore, the infinitesimal angular momentum of this element is: and integrating this differential over the volume of the entire mass gives its total angular momentum: In the derivation which follows, integrals similar to this can replace the sums for the case of continuous mass. Collection of particles For a collection of particles in motion about an arbitrary origin, it is informative to develop the equation of angular momentum by resolving their motion into components about their own center of mass and about the origin. Given, is the mass of particle , is the position vector of particle w.r.t. the origin, is the velocity of particle w.r.t. the origin, is the position vector of the center of mass w.r.t. the origin, is the velocity of the center of mass w.r.t. the origin, is the position vector of particle w.r.t. the center of mass, is the velocity of particle w.r.t. the center of mass, The total mass of the particles is simply their sum, The position vector of the center of mass is defined by, By inspection, and The total angular momentum of the collection of particles is the sum of the angular momentum of each particle, Expanding , Expanding , It can be shown that (see sidebar), and therefore the second and third terms vanish, The first term can be rearranged, and total angular momentum for the collection of particles is finally, The first term is the angular momentum of the center of mass relative to the origin. Similar to , below, it is the angular momentum of one particle of mass M at the center of mass moving with velocity V. The second term is the angular momentum of the particles moving relative to the center of mass, similar to , below. The result is general—the motion of the particles is not restricted to rotation or revolution about the origin or center of mass. The particles need not be individual masses, but can be elements of a continuous distribution, such as a solid body. Rearranging equation () by vector identities, multiplying both terms by "one", and grouping appropriately, gives the total angular momentum of the system of particles in terms of moment of inertia and angular velocity , Single particle case In the case of a single particle moving about the arbitrary origin, and equations () and () for total angular momentum reduce to, Case of a fixed center of mass For the case of the center of mass fixed in space with respect to the origin, and equations () and () for total angular momentum reduce to, Angular momentum in general relativity In modern (20th century) theoretical physics, angular momentum (not including any intrinsic angular momentum – see below) is described using a different formalism, instead of a classical pseudovector. In this formalism, angular momentum is the 2-form Noether charge associated with rotational invariance. As a result, angular momentum is not conserved for general curved spacetimes, unless it happens to be asymptotically rotationally invariant. In classical mechanics, the angular momentum of a particle can be reinterpreted as a plane element: in which the exterior product (∧) replaces the cross product (×) (these products have similar characteristics but are nonequivalent). This has the advantage of a clearer geometric interpretation as a plane element, defined using the vectors x and p, and the expression is true in any number of dimensions. In Cartesian coordinates: or more compactly in index notation: The angular velocity can also be defined as an anti-symmetric second order tensor, with components ωij. The relation between the two anti-symmetric tensors is given by the moment of inertia which must now be a fourth order tensor: Again, this equation in L and ω as tensors is true in any number of dimensions. This equation also appears in the geometric algebra formalism, in which L and ω are bivectors, and the moment of inertia is a mapping between them. In relativistic mechanics, the relativistic angular momentum of a particle is expressed as an anti-symmetric tensor of second order: in terms of four-vectors, namely the four-position X and the four-momentum P, and absorbs the above L together with the moment of mass, i.e., the product of the relativistic mass of the particle and its centre of mass, which can be thought of as describing the motion of its centre of mass, since mass–energy is conserved. In each of the above cases, for a system of particles the total angular momentum is just the sum of the individual particle angular momenta, and the centre of mass is for the system. Angular momentum in quantum mechanics In quantum mechanics, angular momentum (like other quantities) is expressed as an operator, and its one-dimensional projections have quantized eigenvalues. Angular momentum is subject to the Heisenberg uncertainty principle, implying that at any time, only one projection (also called "component") can be measured with definite precision; the other two then remain uncertain. Because of this, the axis of rotation of a quantum particle is undefined. Quantum particles do possess a type of non-orbital angular momentum called "spin", but this angular momentum does not correspond to a spinning motion. In relativistic quantum mechanics the above relativistic definition becomes a tensorial operator. Spin, orbital, and total angular momentum The classical definition of angular momentum as can be carried over to quantum mechanics, by reinterpreting r as the quantum position operator and p as the quantum momentum operator. L is then an operator, specifically called the orbital angular momentum operator. The components of the angular momentum operator satisfy the commutation relations of the Lie algebra so(3). Indeed, these operators are precisely the infinitesimal action of the rotation group on the quantum Hilbert space. (See also the discussion below of the angular momentum operators as the generators of rotations.) However, in quantum physics, there is another type of angular momentum, called spin angular momentum, represented by the spin operator S. Spin is often depicted as a particle literally spinning around an axis, but this is a misleading and inaccurate picture: spin is an intrinsic property of a particle, unrelated to any sort of motion in space and fundamentally different from orbital angular momentum. All elementary particles have a characteristic spin (possibly zero), and almost all elementary particles have nonzero spin. For example electrons have "spin 1/2" (this actually means "spin ħ/2"), photons have "spin 1" (this actually means "spin ħ"), and pi-mesons have spin 0. Finally, there is total angular momentum J, which combines both the spin and orbital angular momentum of all particles and fields. (For one particle, .) Conservation of angular momentum applies to J, but not to L or S; for example, the spin–orbit interaction allows angular momentum to transfer back and forth between L and S, with the total remaining constant. Electrons and photons need not have integer-based values for total angular momentum, but can also have half-integer values. In molecules the total angular momentum F is the sum of the rovibronic (orbital) angular momentum N, the electron spin angular momentum S, and the nuclear spin angular momentum I. For electronic singlet states the rovibronic angular momentum is denoted J rather than N. As explained by Van Vleck, the components of the molecular rovibronic angular momentum referred to molecule-fixed axes have different commutation relations from those for the components about space-fixed axes. Quantization In quantum mechanics, angular momentum is quantized – that is, it cannot vary continuously, but only in "quantum leaps" between certain allowed values. For any system, the following restrictions on measurement results apply, where is the reduced Planck constant and is any Euclidean vector such as x, y, or z: The reduced Planck constant is tiny by everyday standards, about 10−34 J s, and therefore this quantization does not noticeably affect the angular momentum of macroscopic objects. However, it is very important in the microscopic world. For example, the structure of electron shells and subshells in chemistry is significantly affected by the quantization of angular momentum. Quantization of angular momentum was first postulated by Niels Bohr in his model of the atom and was later predicted by Erwin Schrödinger in his Schrödinger equation. Uncertainty In the definition , six operators are involved: The position operators , , , and the momentum operators , , . However, the Heisenberg uncertainty principle tells us that it is not possible for all six of these quantities to be known simultaneously with arbitrary precision. Therefore, there are limits to what can be known or measured about a particle's angular momentum. It turns out that the best that one can do is to simultaneously measure both the angular momentum vector's magnitude and its component along one axis. The uncertainty is closely related to the fact that different components of an angular momentum operator do not commute, for example . (For the precise commutation relations, see angular momentum operator.) Total angular momentum as generator of rotations As mentioned above, orbital angular momentum L is defined as in classical mechanics: , but total angular momentum J is defined in a different, more basic way: J is defined as the "generator of rotations". More specifically, J is defined so that the operator is the rotation operator that takes any system and rotates it by angle about the axis . (The "exp" in the formula refers to operator exponential.) To put this the other way around, whatever our quantum Hilbert space is, we expect that the rotation group SO(3) will act on it. There is then an associated action of the Lie algebra so(3) of SO(3); the operators describing the action of so(3) on our Hilbert space are the (total) angular momentum operators. The relationship between the angular momentum operator and the rotation operators is the same as the relationship between Lie algebras and Lie groups in mathematics. The close relationship between angular momentum and rotations is reflected in Noether's theorem that proves that angular momentum is conserved whenever the laws of physics are rotationally invariant. Angular momentum in electrodynamics When describing the motion of a charged particle in an electromagnetic field, the canonical momentum P (derived from the Lagrangian for this system) is not gauge invariant. As a consequence, the canonical angular momentum L = r × P is not gauge invariant either. Instead, the momentum that is physical, the so-called kinetic momentum (used throughout this article), is (in SI units) where e is the electric charge of the particle and A the magnetic vector potential of the electromagnetic field. The gauge-invariant angular momentum, that is kinetic angular momentum, is given by The interplay with quantum mechanics is discussed further in the article on canonical commutation relations. Angular momentum in optics In classical Maxwell electrodynamics the Poynting vector is a linear momentum density of electromagnetic field. The angular momentum density vector is given by a vector product as in classical mechanics: The above identities are valid locally, i.e. in each space point in a given moment . Angular momentum in nature and the cosmos Tropical cyclones and other related weather phenomena involve conservation of angular momentum in order to explain the dynamics. Winds revolve slowly around low pressure systems, mainly due to the coriolis effect. If the low pressure intensifies and the slowly circulating air is drawn toward the center, the molecules must speed up in order to conserve angular momentum. By the time they reach the center, the speeds become destructive. Johannes Kepler determined the laws of planetary motion without knowledge of conservation of momentum. However, not long after his discovery their derivation was determined from conservation of angular momentum. Planets move more slowly the further they are out in their elliptical orbits, which is explained intuitively by the fact that orbital angular momentum is proportional to the radius of the orbit. Since the mass does not change and the angular momentum is conserved, the velocity drops. Tidal acceleration is an effect of the tidal forces between an orbiting natural satellite (e.g. the Moon) and the primary planet that it orbits (e.g. Earth). The gravitational torque between the Moon and the tidal bulge of Earth causes the Moon to be constantly promoted to a slightly higher orbit (~3.8 cm per year) and Earth to be decelerated (by −25.858 ± 0.003″/cy²) in its rotation (the length of the day increases by ~1.7 ms per century, +2.3 ms from tidal effect and −0.6 ms from post-glacial rebound). The Earth loses angular momentum which is transferred to the Moon such that the overall angular momentum is conserved. Angular momentum in engineering and technology Examples of using conservation of angular momentum for practical advantage are abundant. In engines such as steam engines or internal combustion engines, a flywheel is needed to efficiently convert the lateral motion of the pistons to rotational motion. Inertial navigation systems explicitly use the fact that angular momentum is conserved with respect to the inertial frame of space. Inertial navigation is what enables submarine trips under the polar ice cap, but are also crucial to all forms of modern navigation. Rifled bullets use the stability provided by conservation of angular momentum to be more true in their trajectory. The invention of rifled firearms and cannons gave their users significant strategic advantage in battle, and thus were a technological turning point in history. History Isaac Newton, in the Principia, hinted at angular momentum in his examples of the first law of motion,A top, whose parts by their cohesion are perpetually drawn aside from rectilinear motions, does not cease its rotation, otherwise than as it is retarded by the air. The greater bodies of the planets and comets, meeting with less resistance in more free spaces, preserve their motions both progressive and circular for a much longer time.He did not further investigate angular momentum directly in the Principia, saying:From such kind of reflexions also sometimes arise the circular motions of bodies about their own centres. But these are cases which I do not consider in what follows; and it would be too tedious to demonstrate every particular that relates to this subject.However, his geometric proof of the law of areas is an outstanding example of Newton's genius, and indirectly proves angular momentum conservation in the case of a central force. The Law of Areas Newton's derivation As a planet orbits the Sun, the line between the Sun and the planet sweeps out equal areas in equal intervals of time. This had been known since Kepler expounded his second law of planetary motion. Newton derived a unique geometric proof, and went on to show that the attractive force of the Sun's gravity was the cause of all of Kepler's laws. During the first interval of time, an object is in motion from point A to point B. Undisturbed, it would continue to point c during the second interval. When the object arrives at B, it receives an impulse directed toward point S. The impulse gives it a small added velocity toward S, such that if this were its only velocity, it would move from B to V during the second interval. By the rules of velocity composition, these two velocities add, and point C is found by construction of parallelogram BcCV. Thus the object's path is deflected by the impulse so that it arrives at point C at the end of the second interval. Because the triangles SBc and SBC have the same base SB and the same height Bc or VC, they have the same area. By symmetry, triangle SBc also has the same area as triangle SAB, therefore the object has swept out equal areas SAB and SBC in equal times. At point C, the object receives another impulse toward S, again deflecting its path during the third interval from d to D. Thus it continues to E and beyond, the triangles SAB, SBc, SBC, SCd, SCD, SDe, SDE all having the same area. Allowing the time intervals to become ever smaller, the path ABCDE approaches indefinitely close to a continuous curve. Note that because this derivation is geometric, and no specific force is applied, it proves a more general law than Kepler's second law of planetary motion. It shows that the Law of Areas applies to any central force, attractive or repulsive, continuous or non-continuous, or zero. Conservation of angular momentum in the Law of Areas The proportionality of angular momentum to the area swept out by a moving object can be understood by realizing that the bases of the triangles, that is, the lines from S to the object, are equivalent to the radius , and that the heights of the triangles are proportional to the perpendicular component of velocity . Hence, if the area swept per unit time is constant, then by the triangular area formula , the product and therefore the product are constant: if and the base length are decreased, and height must increase proportionally. Mass is constant, therefore angular momentum is conserved by this exchange of distance and velocity. In the case of triangle SBC, area is equal to (SB)(VC). Wherever C is eventually located due to the impulse applied at B, the product (SB)(VC), and therefore remain constant. Similarly so for each of the triangles. Another areal proof of conservation of momentum for any central force uses Mamikon's sweeping tangents theorem. After Newton Leonhard Euler, Daniel Bernoulli, and Patrick d'Arcy all understood angular momentum in terms of conservation of areal velocity, a result of their analysis of Kepler's second law of planetary motion. It is unlikely that they realized the implications for ordinary rotating matter. In 1736 Euler, like Newton, touched on some of the equations of angular momentum in his Mechanica without further developing them. Bernoulli wrote in a 1744 letter of a "moment of rotational motion", possibly the first conception of angular momentum as we now understand it. In 1799, Pierre-Simon Laplace first realized that a fixed plane was associated with rotation—his invariable plane. Louis Poinsot in 1803 began representing rotations as a line segment perpendicular to the rotation, and elaborated on the "conservation of moments". In 1852 Léon Foucault used a gyroscope in an experiment to display the Earth's rotation. William J. M. Rankine's 1858 Manual of Applied Mechanics defined angular momentum in the modern sense for the first time:...a line whose length is proportional to the magnitude of the angular momentum, and whose direction is perpendicular to the plane of motion of the body and of the fixed point, and such, that when the motion of the body is viewed from the extremity of the line, the radius-vector of the body seems to have right-handed rotation.In an 1872 edition of the same book, Rankine stated that "The term angular momentum was introduced by Mr. Hayward," probably referring to R.B. Hayward's article On a Direct Method of estimating Velocities, Accelerations, and all similar Quantities with respect to Axes moveable in any manner in Space with Applications, which was introduced in 1856, and published in 1864. Rankine was mistaken, as numerous publications feature the term starting in the late 18th to early 19th centuries. However, Hayward's article apparently was the first use of the term and the concept seen by much of the English-speaking world. Before this, angular momentum was typically referred to as "momentum of rotation" in English. See also Footnotes References Further reading . External links "What Do a Submarine, a Rocket and a Football Have in Common? Why the prolate spheroid is the shape for success" (Scientific American, November 8, 2010) Conservation of Angular Momentum – a chapter from an online textbook Angular Momentum in a Collision Process – derivation of the three-dimensional case Angular Momentum and Rolling Motion – more momentum theory Mechanical quantities Rotation Conservation laws Moment (physics) Angular momentum
2840
https://en.wikipedia.org/wiki/Plum%20pudding%20model
Plum pudding model
The plum pudding model is one of several historical scientific models of the atom. First proposed by J. J. Thomson in 1904 soon after the discovery of the electron, but before the discovery of the atomic nucleus, the model tried to account for two properties of atoms then known: that electrons are negatively charged subatomic particles and that atoms have no net electric charge. The plum pudding model has electrons surrounded by a volume of positive charge, like negatively charged "plums" embedded in a positively charged "pudding". Overview It had been known for many years that atoms contain negatively charged subatomic particles. Thomson called them "corpuscles" (particles), but they were more commonly called "electrons", the name G. J. Stoney had coined for the "fundamental unit quantity of electricity" in 1891. It had also been known for many years that atoms have no net electric charge. Thomson held that atoms must also contain some positive charge that cancels out the negative charge of their electrons. Thomson published his proposed model in the March 1904 edition of the Philosophical Magazine, the leading British science journal of the day. In Thomson's view: ... the atoms of the elements consist of a number of negatively electrified corpuscles enclosed in a sphere of uniform positive electrification, ... Thomson's model was the first to assign a specific inner structure to an atom, though his original description did not include mathematical formulas. He had followed the work of William Thomson who had written a paper proposing a vortex atom in 1867, J.J. Thomson abandoned his 1890 "nebular atom" hypothesis, based on the vortex theory of the atom, in which atoms were composed of immaterial vortices and suggested there were similarities between the arrangement of vortices and periodic regularity found among the chemical elements. Thomson based his atomic model on known experimental evidence of the day, and in fact, followed Lord Kelvin's lead again as Kelvin had proposed a positive sphere atom a year earlier. Thomson's proposal, based on Kelvin's model of a positive volume charge, served to guide future experiments. The main objective of Thomson's model after its initial publication was to account for the electrically neutral and chemically varied state of the atom. Electron orbits were stable under classical mechanics. When an electron moves away from the center of the positively charged sphere it is subjected to a greater net positive inward force due to the presence of more positive charge inside its orbit (see Gauss's law). Electrons were free to rotate in rings that were further stabilized by interactions among the electrons, and spectroscopic measurements were meant to account for energy differences associated with different electron rings. As for the properties of matter, Thomson believed they arose from electrical effects. He further emphasized the need of a theory to help picture the physical and chemical aspects of an atom using the theory of corpuscles and positive charge. Thomson attempted unsuccessfully to reshape his model to account for some of the major spectral lines experimentally known for several elements. After the scientific discovery of radioactivity, Thomson decided to address it in his model by stating: ... we must face the problem of the constitution of the atom, and see if we can imagine a model which has in it the potentiality of explaining the remarkable properties shown by radio-active substances ... Thomson's model changed over the course of its initial publication, finally becoming a model with much more mobility containing electrons revolving in the dense field of positive charge rather than a static structure. Despite this, the colloquial nickname "plum pudding" was soon attributed to Thomson's model as the distribution of electrons within its positively charged region of space reminded many scientists of raisins, then called "plums", in the common English dessert, plum pudding. In 1909, Hans Geiger and Ernest Marsden conducted experiments where alpha particles were fired through thin sheets of gold. Their professor, Ernest Rutherford, expected to find results consistent with Thomson's atomic model. However, when the results were published in 1911, they instead implied the presence of a very small nucleus of positive charge at the center of each gold atom. This led to the development of the Rutherford model of the atom. Immediately after Rutherford published his results, Antonius van den Broek made the intuitive proposal that the atomic number of an atom is the total number of units of charge present in its nucleus. Henry Moseley's 1913 experiments (see Moseley's law) provided the necessary evidence to support Van den Broek's proposal. The effective nuclear charge was found to be consistent with the atomic number (Moseley found only one unit of charge difference). This work culminated in the solar-system-like Bohr model of the atom in the same year, in which a nucleus containing an atomic number of positive charges is surrounded by an equal number of electrons in orbital shells. As Thomson's model guided Rutherford's experiments, Bohr's model guided Moseley's research. The Bohr model was elaborated upon during the time of the "old quantum theory", and then subsumed by the full-fledged development of quantum mechanics. Related scientific problems As an important example of a scientific model, the plum pudding model has motivated and guided several related scientific problems. Mathematical Thomson problem A particularly useful mathematics problem related to the plum pudding model is the optimal distribution of equal point charges on a unit sphere, called the Thomson problem. The Thomson problem is a natural consequence of the plum pudding model in the absence of its uniform positive background charge. References Foundational quantum physics Atoms Electron Periodic table Obsolete theories in physics
2844
https://en.wikipedia.org/wiki/Atomic%20theory
Atomic theory
Atomic theory is the scientific theory that matter is composed of particles called atoms. The concept that matter is composed of discrete particles is an ancient idea, but gained scientific credence in the 18th and 19th centuries when scientists found it could explain the behaviors of gases and how chemical elements reacted with each other. By the end of the 19th century, atomic theory had gained widespread acceptance in the scientific community. The term "atom" comes from the Greek word atomos, which means "uncuttable". John Dalton applied the term to the basic units of mass of the chemical elements under the mistaken belief that chemical atoms are the fundamental particles in nature; it was another century before scientists realized that Dalton's so-called atoms have an underlying structure of their own. Particles which are truly indivisible are now referred to as "elementary particles". History Philosophical atomism The idea that matter is made up of discrete units is a very old idea, appearing in many ancient cultures, including Greece and India. The word "atom" (; ), meaning "uncuttable", was coined by the Pre-Socratic Greek philosophers Leucippus and his pupil Democritus (460–370 BC). Democritus taught that atoms were infinite in number, uncreated, and eternal, and that the qualities of an object result from the kind of atoms that compose it. Democritus's atomism was refined and elaborated by the later Greek philosopher Epicurus (341–270 BC), and by the Roman Epicurean poet Lucretius (99–55 BC). During the Early Middle Ages, atomism was mostly forgotten in western Europe. During the 12th century, it became known again in western Europe through references to it in the newly-rediscovered writings of Aristotle. The opposing view of matter upheld by Aristotle was that matter was continuous and infinite and could be subdivided without limit. In the 14th century, the rediscovery of major ancient works describing atomist teachings, including Lucretius's De rerum natura and Diogenes Laërtius's Lives and Opinions of Eminent Philosophers, led to increased scholarly attention on the subject. Nonetheless, because atomism was associated with the philosophy of Epicureanism, which contradicted orthodox Christian teachings, belief in atoms was not considered acceptable by most European philosophers. The French Catholic priest Pierre Gassendi (1592–1655) revived Epicurean atomism with modifications, arguing that atoms were created by God and, though extremely numerous, are not infinite in number. He was the first person who used the term "molecule" to describe aggregation of atoms. Gassendi's modified theory of atoms was popularized in France by the physician François Bernier (1620–1688) and in England by the natural philosopher Walter Charleton (1619–1707). The chemist Robert Boyle (1627–1691) and the physicist Isaac Newton (1642–1727) both defended atomism and, by the end of the 17th century, the idea of an atomistic foundation of nature had become accepted by portions of the scientific community. Dalton's law of multiple proportions Near the end of the 18th century, two laws about chemical reactions emerged without referring to the notion of an atomic theory. The first was the law of conservation of mass, closely associated with the work of Antoine Lavoisier, which states that the total mass in a chemical reaction remains constant (that is, the reactants have the same mass as the products). The second was the law of definite proportions. First established by the French chemist Joseph Proust in 1797 this law states that if a compound is broken down into its constituent chemical elements, then the masses of the constituents will always have the same proportions by weight, regardless of the quantity or source of the original substance. John Dalton studied data gathered by himself and other scientists and noticed a pattern that later came to be known as the law of multiple proportions. In compounds which all contain a particular element, the content of that element will differ across these compounds by ratios of small whole numbers. Dalton concluded from all this that elements react with each other in discrete and consistent units of weight. Borrowing the word from the philosophical tradition, Dalton called these units atoms. Example 1 — tin oxides: Dalton identified two oxides of tin. One is a grey powder (which Dalton referred to as the "protoxide") in which for every 100 parts of tin there is 13.5 parts of oxygen. The other oxide is a white powder (which Dalton referred to as the "deutoxide") in which for every 100 parts of tin there are 27 parts of oxygen. 13.5 and 27 form a ratio of 1:2. Dalton concluded that in the grey oxide, there is one oxygen atom for every tin atom, and in the white oxide there are two oxygen atoms for every tin atom. These oxides are today known as tin(II) oxide (SnO) and tin(IV) oxide (SnO2) respectively. Example 2 — iron oxides: Dalton identified two oxides of iron. One is a black powder in which for every 100 parts of iron there are about 28 parts of oxygen. The other is a red powder in which for every 100 parts of iron there are 42 parts of oxygen. 28 and 42 form a ratio of 2:3. These oxides are today known as iron(II) oxide (better known as wüstite) and iron(III) oxide (the major constituent of rust). Their modern formulas are Fe2O2 and Fe2O3 respectively. Example 3 — nitrogen oxides: Dalton was aware of three oxides of nitrogen: "nitrous oxide", "nitrous gas", and "nitric acid" (these compounds are known today as nitrous oxide, nitric oxide, and nitrogen dioxide respectively). Dalton understood that "Nitrous oxide" is 63.3% nitrogen and 36.7% oxygen, which means it has 80 g of oxygen for every 140 g of nitrogen. "Nitrous gas" is 44.05% nitrogen and 55.95% oxygen, which means there are 160 g of oxygen for every 140 g of nitrogen. "Nitric acid" is 29.5% nitrogen and 70.5% oxygen, which means it has 320 g of oxygen for every 140 g of nitrogen. 80 g, 160 g, and 320 g form a ratio of 1:2:4. Dalton's formulas for these compounds were N2O, NO, and NO2, essentially the same as today's. Dalton's atomic theory From the evidence provided by the law of multiple proportions Dalton developed his atomic theory. A central problem for the theory was to determine the relative weights of the atoms of various elements. The atomic weight of an element is the weight an atom of that element is compared to the weights of atoms of the other elements. Dalton and his contemporaries could not measure the absolute weight of atoms—i.e. their weight in grams—because atoms were far too small to be directly measured with the technologies that existed in the 19th century. Instead, they measured how heavy atoms of various elements were relative to atoms of hydrogen, which chemists of Dalton's day knew was the lightest element in nature. Dalton estimated the atomic weights according to the mass ratios in which they combined, with the weight of the hydrogen atom taken conventionally as unity. However, Dalton did not realize that some elements exist as molecules in their natural pure form—-e.g. pure oxygen exists as O2. He also mistakenly believed that the simplest compound between any two elements is always one atom of each (so he thought water was HO, not H2O). This, in addition to the limitations of his apparatus, flawed his results. For instance, in 1803 he believed that oxygen atoms were 5.5 times heavier than hydrogen atoms, because in water he measured 5.5 grams of oxygen for every 1 gram of hydrogen and believed the formula for water was HO. Adopting better data, in 1806 he concluded that the atomic weight of oxygen must actually be 7 rather than 5.5, and he retained this weight for the rest of his life. Others at this time had already concluded from more precise measurements that the oxygen atom must weigh 8 relative to hydrogen equals 1, if one assumes Dalton's formula for the water molecule (HO), or 16 if one assumes the modern water formula (H2O). The flaw in Dalton's theory was corrected in principle in 1811 by Amedeo Avogadro. Avogadro had proposed that equal volumes of any two gases, at equal temperature and pressure, contain equal numbers of molecules (in other words, the mass of a gas's particles does not affect the volume that it occupies). Avogadro's hypothesis, now usually called Avogadro's law, provided a method for deducing the relative weights of the molecules of gaseous elements, for if the hypothesis is correct relative gas densities directly indicate the relative weights of the particles that compose the gases. This way of thinking led directly to a second hypothesis: the particles of certain elemental gases were not atoms, but molecules consisting of two atoms each; and when combining chemically these molecules often split in two. For instance, the fact that two liters of hydrogen will react with just one liter of oxygen to produce two liters of water vapor (at constant pressure and temperature) suggested that a single oxygen molecule must split in two in order to form two molecules of water. This also meant that the water molecule must be H2O. Thus, Avogadro was able to offer more accurate estimates of the atomic mass of oxygen and various other elements, and made a distinction between molecules and atoms. What we now call atoms Avogadro called "elementary molecules", and what we now call molecules Avogadro called "compound molecules". Opposition to atomic theory Dalton's atomic theory was not immediately accepted by all scientists. One problem was the lack of uniform nomenclature. The word "atom" implied indivisibility, but Dalton instead defined an atom as being the basic particle of any substance, which meant that "compound atoms" such as carbon dioxide could divided, as opposed to "elementary atoms". Other scientists used their own nomenclature, which only added to the general confusion. For instance, J. J. Berzelius used the term "organic atoms" to refer to particles containing three or more elements, because he thought this only existed in organic compounds. A second problem was philosophical. Scientists in the 19th century had no way of directly observing atoms. They inferred the existence of atoms through indirect observations, such as Dalton's law of multiple proportions. Some Scientists, notably those who ascribed to the school of positivism, argued that scientists should not attempt to deduce the deeper reality of the universe, but only systemize what patterns they can directly observe. The anti-atomists argued that while atoms might be a useful abstraction for predicting how elements react, they do not reflect concrete reality. Such scientists were sometimes known as "equivalentists", because they preferred the theory of equivalent weights, which is a generalization of Proust's law of definite proportions. For example, 1 gram of hydrogen will combine with 8 grams of oxygen to form 9 grams of water, therefore the equivalent weight of oxygen is 8 grams. This position was eventually quashed by two important advancements that happened later in the 19th century: the development of the periodic table and the discovery that molecules have an internal architecture that determines their properties. Dalton's law of multiple proportions was also shown to not be a universal law when it came to organic substances. For instance, in oleic acid there is 34 g of hydrogen for every 216 g of carbon, and in methane there is 72 g of hydrogen for every 216 g of carbon. 34 and 72 form a ratio of 17:36, which is not a ratio of small whole numbers. We know now that carbon-based substances can have very large molecules, larger than any the other elements can form. Oleic acid's formula is C18H34O2 and methane's is CH4. Isomerism Scientists soon discovered cases of substances that have the same proportional elemental composition but different properties. For instance, in 1827, Friedrich Wöhler discovered that silver fulminate and silver cyanate are both 107 parts silver, 12 parts carbon, 14 parts nitrogen, and 12 parts oxygen (we now know their formulas as both AgCNO). Wöhler also discovered that urea and ammonium cyanate both have the same composition (we now know their formulas are CH4N2O) but different properties. In 1830, Jöns Jacob Berzelius introduced the term isomerism to describe the phenomenon. Most chemists of the 1830s and later accepted the suggestion that isomerism resulted from the differing arrangements of the same numbers and types of atoms, resulting in distinct substances. The numbers of isomers proliferated rapidly with the development of organic chemistry, especially after the introduction of atomic valence and structural theory in the 1860s. Consider, for example, pentane (C5H12). According to the theories of valence and structure, there are three possible atomic configurations for the pentane molecule, and there really are three different substances that have the same composition as pentane but different properties. Isomerism was not something that could be fully explained by alternative theories to atomic theory, such as radical theory and the theory of types. In 1860, Louis Pasteur hypothesized that the molecules of isomers might have the same composition but different arrangements of their atoms in three dimensions. In 1874, Jacobus Henricus van 't Hoff proposed that the carbon atom forms bonds to other atoms in a tetrahedral arrangement. Working from this hypothesis, he could explain cases of isomerism where the relevant molecules appeared to have the same basic skeletal structure; the two molecules differed only in their three-dimensional spatial configurations, like two otherwise identical left and right hands, or two identical spirals that wind clockwise and counterclockwise. Mendeleev's periodic table Dmitrii Mendeleev noticed that when he arranged the elements in a row according to their atomic weights, there was a certain periodicity to them. For instance, the second element, lithium, had similar properties to the ninth element, sodium, and the sixteenth element, potassium — a period of seven. Likewise, beryllium, magnesium, and calcium were similar and all were seven places apart from each other on Mendeleev's table (eight places apart on the modern table). Using these patterns, Mendeleev predicted the existence and properties of new elements, which were later discovered in nature: scandium, gallium, and germanium. Moreover, the periodic table could predict how many atoms of other elements that an atom could bond with — e.g., germanium and carbon are in the same group on the table and their atoms both combine with two oxygen atoms each (GeO2 and CO2). Mendeleev found these patterns to confirm the hypothesis that matter is made of atoms because it showed that the elements could be categorized by their atomic weight. Inserting a new element into the middle of a period would break the parallel between that period and the next, and would also violate Dalton's law of multiple proportions. Brownian motion In 1827, the British botanist Robert Brown observed that dust particles inside pollen grains floating in water constantly jiggled about for no apparent reason. In 1905, Albert Einstein theorized that this Brownian motion was caused by the water molecules continuously knocking the grains about, and developed a mathematical model to describe it. This model was validated experimentally in 1908 by French physicist Jean Perrin, who used Einstein's equations to determine the size of atoms. Statistical mechanics In order to introduce the Ideal gas law and statistical forms of physics, it was necessary to postulate the existence of atoms. In 1738, Swiss physicist and mathematician Daniel Bernoulli postulated that the pressure of gases and heat were both caused by the underlying motion of molecules. In 1860, James Clerk Maxwell, who was a vocal proponent of atomism, was the first to use statistical mechanics in physics. Ludwig Boltzmann and Rudolf Clausius expanded his work on gases and the laws of Thermodynamics especially the second law relating to entropy. In the 1870s, Josiah Willard Gibbs extended the laws of entropy and thermodynamics and coined the term "statistical mechanics." Einstein later independently reinvented Gibbs' laws, because they had only been printed in an obscure American journal. Einstein later commented that had he known of Gibbs' work, he would "not have published those papers at all, but confined myself to the treatment of some few points [that were distinct]." All of statistical mechanics and the laws of heat, gas, and entropy took the existence of atoms as a necessary postulate. Discovery of subatomic particles Atoms were thought to be the smallest possible division of matter until 1897 when J. J. Thomson discovered the electron through his work on cathode rays. A Crookes tube is a sealed glass container in which two electrodes are separated by a vacuum. When a voltage is applied across the electrodes, cathode rays are generated, creating a glowing patch where they strike the glass at the opposite end of the tube. Through experimentation, Thomson discovered that the rays could be deflected by an electric field (in addition to magnetic fields, which was already known). He concluded that these rays, rather than being a form of light, were composed of very light negatively charged particles. Thomson called these "corpuscles", but other scientists called them electrons, following an 1894 suggestion by George Johnstone Stoney for naming the basic unit of electrical charge. He measured the mass-to-charge ratio and discovered it was 1800 times smaller than that of hydrogen, the smallest atom. These corpuscles were a particle unlike any other previously known. Thomson suggested that atoms were divisible, and that the corpuscles were their building blocks. To explain the overall neutral charge of the atom, he proposed that the corpuscles were distributed in a uniform sea of positive charge. This became known as the plum pudding model as the electrons were embedded in the positive charge like bits of fruit in a dried-fruit pudding, though Thomson thought the electrons moved about within the atom. Discovery of the nucleus Thomson's plum pudding model was disproved in 1909 by one of his former students, Ernest Rutherford, who discovered that most of the mass and positive charge of an atom is concentrated in a very small fraction of its volume, which he assumed to be at the very center. Ernest Rutherford and his colleagues Hans Geiger and Ernest Marsden came to have doubts about the Thomson model after they encountered difficulties when they tried to build an instrument to measure the charge-to-mass ratio of alpha particles (these are positively-charged particles emitted by certain radioactive substances such as radium). The alpha particles were being scattered by the air in the detection chamber, which made the measurements unreliable. Thomson had encountered a similar problem in his work on cathode rays, which he solved by creating a near-perfect vacuum in his instruments. Rutherford didn't think he'd run into this same problem because alpha particles are much heavier than electrons. According to Thomson's model of the atom, the positive charge in the atom is not concentrated enough to produce an electric field strong enough to deflect an alpha particle, and the electrons are so lightweight they should be pushed aside effortlessly by the much heavier alpha particles. Yet there was scattering, so Rutherford and his colleagues decided to investigate this scattering carefully. Between 1908 and 1913, Rutherford and his colleagues performed a series of experiments in which they bombarded thin foils of metal with alpha particles. They spotted alpha particles being deflected by angles greater than 90°. To explain this, Rutherford proposed that the positive charge of the atom is not distributed throughout the atom's volume as Thomson believed, but is concentrated in a tiny nucleus at the center. Only such an intense concentration of charge could produce an electric field strong enough to deflect the alpha particles as observed. Rutherford's model is sometimes called the "planetary model". However, Hantaro Nagaoka was quoted by Rutherford as the first to suggest a planetary atom in 1904. And planetary models had been suggested as early as 1897 such as the one by Joseph Larmor. Probably the earliest solar system model was found in an unpublished note by Ludwig August Colding in 1854 whose idea was that atoms were analogous to planetary systems that rotate and cause magnetic polarity. First steps toward a quantum physical model of the atom The planetary model of the atom had two significant shortcomings. The first is that, unlike planets orbiting a sun, electrons are charged particles. An accelerating electric charge is known to emit electromagnetic waves according to the Larmor formula in classical electromagnetism. An orbiting charge should steadily lose energy and spiral toward the nucleus, colliding with it in a small fraction of a second. The second problem was that the planetary model could not explain the highly peaked emission and absorption spectra of atoms that were observed. Quantum theory revolutionized physics at the beginning of the 20th century, when Max Planck and Albert Einstein postulated that light energy is emitted or absorbed in discrete amounts known as quanta (singular, quantum). This led to a series of quantum atomic models such as the quantum model of Arthur Erich Haas in 1910 and the 1912 John William Nicholson quantum atomic model that quantized angular momentum as h/2. In 1913, Niels Bohr incorporated this idea into his Bohr model of the atom, in which an electron could only orbit the nucleus in particular circular orbits with fixed angular momentum and energy, its distance from the nucleus (i.e., their radii) being proportional to its energy. Under this model an electron could not spiral into the nucleus because it could not lose energy in a continuous manner; instead, it could only make instantaneous "quantum leaps" between the fixed energy levels. When this occurred, light was emitted or absorbed at a frequency proportional to the change in energy (hence the absorption and emission of light in discrete spectra). Bohr's model was not perfect. It could only predict the spectral lines of hydrogen, not those of multielectron atoms. Worse still, it could not even account for all features of the hydrogen spectrum: as spectrographic technology improved, it was discovered that applying a magnetic field caused spectral lines to multiply in a way that Bohr's model couldn't explain. In 1916, Arnold Sommerfeld added elliptical orbits to the Bohr model to explain the extra emission lines, but this made the model very difficult to use, and it still couldn't explain more complex atoms. Discovery of isotopes While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one variety of some elements. The term isotope was coined by Margaret Todd as a suitable name for these varieties. That same year, J. J. Thomson conducted an experiment in which he channeled a stream of neon ions through magnetic and electric fields, striking a photographic plate at the other end. He observed two glowing patches on the plate, which suggested two different deflection trajectories. Thomson concluded this was because some of the neon ions had a different mass. The nature of this differing mass would later be explained by the discovery of neutrons in 1932: all atoms of the same element contain the same number of protons, while different isotopes have different numbers of neutrons. Discovery of nuclear particles In 1917 Rutherford bombarded nitrogen gas with alpha particles and observed hydrogen nuclei being emitted from the gas (Rutherford recognized these, because he had previously obtained them bombarding hydrogen with alpha particles, and observing hydrogen nuclei in the products). Rutherford concluded that the hydrogen nuclei emerged from the nuclei of the nitrogen atoms themselves (in effect, he had split a nitrogen). From his own work and the work of his students Bohr and Henry Moseley, Rutherford knew that the positive charge of any atom could always be equated to that of an integer number of hydrogen nuclei. This, coupled with the atomic mass of many elements being roughly equivalent to an integer number of hydrogen atoms - then assumed to be the lightest particles - led him to conclude that hydrogen nuclei were singular particles and a basic constituent of all atomic nuclei. He named such particles protons. Further experimentation by Rutherford found that the nuclear mass of most atoms exceeded that of the protons it possessed; he speculated that this surplus mass was composed of previously-unknown neutrally charged particles, which were tentatively dubbed "neutrons". In 1928, Walter Bothe observed that beryllium emitted a highly penetrating, electrically neutral radiation when bombarded with alpha particles. It was later discovered that this radiation could knock hydrogen atoms out of paraffin wax. Initially it was thought to be high-energy gamma radiation, since gamma radiation had a similar effect on electrons in metals, but James Chadwick found that the ionization effect was too strong for it to be due to electromagnetic radiation, so long as energy and momentum were conserved in the interaction. In 1932, Chadwick exposed various elements, such as hydrogen and nitrogen, to the mysterious "beryllium radiation", and by measuring the energies of the recoiling charged particles, he deduced that the radiation was actually composed of electrically neutral particles which could not be massless like the gamma ray, but instead were required to have a mass similar to that of a proton. Chadwick now claimed these particles as Rutherford's neutrons. For his discovery of the neutron, Chadwick received the Nobel Prize in 1935. Quantum physical models of the atom In 1924, Louis de Broglie proposed that all moving particles—particularly subatomic particles such as electrons—exhibit a degree of wave-like behavior. Erwin Schrödinger, fascinated by this idea, explored whether or not the movement of an electron in an atom could be better explained as a wave rather than as a particle. Schrödinger's equation, published in 1926, describes an electron as a wave function instead of as a point particle. This approach elegantly predicted many of the spectral phenomena that Bohr's model failed to explain. Although this concept was mathematically convenient, it was difficult to visualize, and faced opposition. One of its critics, Max Born, proposed instead that Schrödinger's wave function did not describe the physical extent of an electron (like a charge distribution in classical electromagnetism), but rather gave the probability that an electron would, when measured, be found at a particular point. This reconciled the ideas of wave-like and particle-like electrons: the behavior of an electron, or of any other subatomic entity, has both wave-like and particle-like aspects, and whether one aspect or the other is more apparent depends upon the situation. A consequence of describing electrons as waveforms is that it is mathematically impossible to simultaneously derive the position and momentum of an electron. This became known as the Heisenberg uncertainty principle after the theoretical physicist Werner Heisenberg, who first published a version of it in 1927. (Heisenberg analyzed a thought experiment where one attempts to measure an electron's position and momentum simultaneously. However, Heisenberg did not give precise mathematical definitions of what the "uncertainty" in these measurements meant. The precise mathematical statement of the position-momentum uncertainty principle is due to Earle Hesse Kennard, Wolfgang Pauli, and Hermann Weyl.) This invalidated Bohr's model, with its neat, clearly defined circular orbits. The modern model of the atom describes the positions of electrons in an atom in terms of probabilities. An electron can potentially be found at any distance from the nucleus, but, depending on its energy level and angular momentum, exists more frequently in certain regions around the nucleus than others; this pattern is referred to as its atomic orbital. The orbitals come in a variety of shapes—sphere, dumbbell, torus, etc.—with the nucleus in the middle. The shapes of atomic orbitals are found by solving the Schrödinger equation; however, analytic solutions of the Schrödinger equation are known for very few relatively simple model Hamiltonians including the hydrogen atom and the dihydrogen cation. Even the helium atom—which contains just two electrons—has defied all attempts at a fully analytic treatment. See also Spectroscopy History of molecular theory Timeline of chemical element discoveries Introduction to quantum mechanics Kinetic theory of gases Atomism The Physical Principles of the Quantum Theory Footnotes Bibliography Further reading Charles Adolphe Wurtz (1881) The Atomic Theory, D. Appleton and Company, New York. Alan J. Rocke (1984) Chemical Atomism in the Nineteenth Century: From Dalton to Cannizzaro, Ohio State University Press, Columbus (open access full text at http://digital.case.edu/islandora/object/ksl%3Ax633gj985). External links Atomism by S. Mark Cohen. Atomic Theory - detailed information on atomic theory with respect to electrons and electricity. The Feynman Lectures on Physics Vol. I Ch. 1: Atoms in Motion Statistical mechanics Chemistry theories Foundational quantum physics Amount of substance
2847
https://en.wikipedia.org/wiki/Aung%20San%20Suu%20Kyi
Aung San Suu Kyi
Aung San Suu Kyi (; ; born 19 June 1945), sometimes abbreviated to Suu Kyi, is a Burmese politician, diplomat, author, and a 1991 Nobel Peace Prize laureate who served as State Counsellor of Myanmar (equivalent to a prime minister) and Minister of Foreign Affairs from 2016 to 2021. She has served as the general secretary of the National League for Democracy (NLD) since the party's founding in 1988, and was registered as its chairperson while it was a legal party from 2011 to 2023. She played a vital role in Myanmar's transition from military junta to partial democracy in the 2010s. The youngest daughter of Aung San, Father of the Nation of modern-day Myanmar, and Khin Kyi, Aung San Suu Kyi was born in Rangoon, British Burma. After graduating from the University of Delhi in 1964 and St Hugh's College, Oxford in 1968, she worked at the United Nations for three years. She married Michael Aris in 1972, with whom she had two children. Aung San Suu Kyi rose to prominence in the 8888 Uprising of 8 August 1988 and became the General Secretary of the NLD, which she had newly formed with the help of several retired army officials who criticized the military junta. In the 1990 elections, NLD won 81% of the seats in Parliament, but the results were nullified, as the military government (the State Peace and Development Council – SPDC) refused to hand over power, resulting in an international outcry. She had been detained before the elections and remained under house arrest for almost 15 of the 21 years from 1989 to 2010, becoming one of the world's most prominent political prisoners. In 1999, Time magazine named her one of the "Children of Gandhi" and his spiritual heir to nonviolence. She survived an assassination attempt in the 2003 Depayin massacre when at least 70 people associated with the NLD were killed. Her party boycotted the 2010 elections, resulting in a decisive victory for the military-backed Union Solidarity and Development Party (USDP). Aung San Suu Kyi became a Pyithu Hluttaw MP while her party won 43 of the 45 vacant seats in the 2012 by-elections. In the 2015 elections, her party won a landslide victory, taking 86% of the seats in the Assembly of the Union—well more than the 67% supermajority needed to ensure that its preferred candidates were elected president and second vice president in the presidential electoral college. Although she was prohibited from becoming the president due to a clause in the constitution—her late husband and children are foreign citizens—she assumed the newly created role of State Counsellor of Myanmar, a role akin to a prime minister or a head of government. When she ascended to the office of state counsellor, Aung San Suu Kyi drew criticism from several countries, organisations and figures over Myanmar's inaction in response to the genocide of the Rohingya people in Rakhine State and refusal to acknowledge that Myanmar's military has committed massacres. Under her leadership, Myanmar also drew criticism for prosecutions of journalists. In 2019, Aung San Suu Kyi appeared in the International Court of Justice where she defended the Burmese military against allegations of genocide against the Rohingya. Aung San Suu Kyi, whose party had won the November 2020 Myanmar general election, was arrested on 1 February 2021 following a coup d'état that returned the Tatmadaw (Myanmar Armed Forces) to power and sparked protests across the country. Several charges were filed against her, and on 6 December 2021, she was sentenced to four years in prison on two of them. Later, on 10 January 2022, she was sentenced to an additional four years on another set of charges. On 12 October 2022, she was convicted of two further charges of corruption and she was sentenced to two terms of three years' imprisonment to be served concurrent to each other. On 30 December 2022, her trials ended with another conviction and an additional sentence of seven years' imprisonment for corruption. Aung San Suu Kyi's final sentence was of 33 years in prison, later reduced to 27 years. The United Nations, most European countries, and the United States condemned the arrests, trials, and sentences as politically motivated. Name Aung San Suu Kyi, like other Burmese names, includes no surname, but is only a personal name, in her case derived from three relatives: "Aung San" from her father, "Suu" from her paternal grandmother, and "Kyi" from her mother Khin Kyi. In Myanmar, Aung San Suu Kyi is often referred to as Daw Aung San Suu Kyi. Daw, literally meaning "aunt", is not part of her name but is an honorific for any older and revered woman, akin to "Madam". She is sometimes addressed as Daw Suu or Amay Suu ("Mother Suu") by her supporters. Personal life Aung San Suu Kyi was born on 19 June 1945 in Rangoon (now Yangon), British Burma. According to Peter Popham, she was born in a small village outside Rangoon called Hmway Saung. Her father, Aung San, allied with the Japanese during World War II. Aung San founded the modern Burmese army and negotiated Burma's independence from the United Kingdom in 1947; he was assassinated by his rivals in the same year. She is a niece of Thakin Than Tun who was the husband of Khin Khin Gyi, the elder sister of her mother Khin Kyi. She grew up with her mother, Khin Kyi, and two brothers, Aung San Lin and Aung San Oo, in Rangoon. Aung San Lin died at the age of eight when he drowned in an ornamental lake on the grounds of the house. Her elder brother emigrated to San Diego, California, becoming a United States citizen. After Aung San Lin's death, the family moved to a house by Inya Lake where Aung San Suu Kyi met people of various backgrounds, political views, and religions. She was educated in Methodist English High School (now Basic Education High School No. 1 Dagon) for much of her childhood in Burma, where she was noted as having a talent for learning languages. She speaks four languages: Burmese, English, French, and Japanese. She is a Theravada Buddhist. Aung San Suu Kyi's mother, Khin Kyi, gained prominence as a political figure in the newly formed Burmese government. She was appointed Burmese ambassador to India and Nepal in 1960, and Aung San Suu Kyi followed her there. She studied in the Convent of Jesus and Mary School in New Delhi, and graduated from Lady Shri Ram College, a constituent college of the University of Delhi in New Delhi, with a degree in politics in 1964. Suu Kyi continued her education at St Hugh's College, Oxford, obtaining a B.A. degree in Philosophy, Politics and Economics in 1967, graduating with a third-class degree that was promoted per tradition to an MA in 1968. After graduating, she lived in New York City with family friend Ma Than E, who was once a popular Burmese pop singer. She worked at the United Nations for three years, primarily on budget matters, writing daily to her future husband, Dr. Michael Aris. On 1 January 1972, Aung San Suu Kyi and Aris, a scholar of Tibetan culture and literature, living abroad in Bhutan, were married. The following year, she gave birth to their first son, Alexander Aris, in London; their second son, Kim, was born in 1977. Between 1985 and 1987, Aung San Suu Kyi was working toward a Master of Philosophy degree in Burmese literature as a research student at the School of Oriental and African Studies (SOAS), University of London. She was elected as an Honorary Fellow of St Hugh's in 1990. For two years, she was a Fellow at the Indian Institute of Advanced Studies (IIAS) in Shimla, India. She also worked for the government of the Union of Burma. In 1988, Aung San Suu Kyi returned to Burma to tend for her ailing mother. Aris' visit in Christmas 1995 was the last time that he and Aung San Suu Kyi met, as she remained in Burma and the Burmese dictatorship denied him any further entry visas. Aris was diagnosed with prostate cancer in 1997 which was later found to be terminal. Despite appeals from prominent figures and organizations, including the United States, UN Secretary-General Kofi Annan and Pope John Paul II, the Burmese government would not grant Aris a visa, saying that they did not have the facilities to care for him, and instead urged Aung San Suu Kyi to leave the country to visit him. She was at that time temporarily free from house arrest but was unwilling to depart, fearing that she would be refused re-entry if she left, as she did not trust the military junta's assurance that she could return. Aris died on his 53rd birthday on 27 March 1999. Since 1989, when his wife was first placed under house arrest, he had seen her only five times, the last of which was for Christmas in 1995. She was also separated from her children, who live in the United Kingdom, until 2011. On 2 May 2008, after Cyclone Nargis hit Burma, Aung San Suu Kyi's dilapidated lakeside bungalow lost its roof and electricity, while the cyclone also left entire villages in the Irrawaddy delta submerged. Plans to renovate and repair the house were announced in August 2009. Aung San Suu Kyi was released from house arrest on 13 November 2010. Political career Political beginning Coincidentally, when Aung San Suu Kyi returned to Burma in 1988, the long-time military leader of Burma and head of the ruling party, General Ne Win, stepped down. Mass demonstrations for democracy followed that event on 8 August 1988 (8–8–88, a day seen as auspicious), which were violently suppressed in what came to be known as the 8888 Uprising. On 24 August 1988, she made her first public appearance at the Yangon General Hospital, addressing protestors from a podium. On 26 August, she addressed half a million people at a mass rally in front of the Shwedagon Pagoda in the capital, calling for a democratic government. However, in September 1988, a new military junta took power. Influenced by both Mahatma Gandhi's philosophy of non-violence and also by the Buddhist concepts, Aung San Suu Kyi entered politics to work for democratization, helped found the National League for Democracy on 27 September 1988, but was put under house arrest on 20 July 1989. She was offered freedom if she left the country, but she refused. Despite her philosophy of non-violence, a group of ex-military commanders and senior politicians who joined NLD during the crisis believed that she was too confrontational and left NLD. However, she retained enormous popularity and support among NLD youths with whom she spent most of her time. During the crisis, the previous democratically elected Prime Minister of Burma, U Nu, initiated to form an interim government and invited opposition leaders to join him. Indian Prime Minister Rajiv Gandhi had signaled his readiness to recognize the interim government. However, Aung San Suu Kyi categorically rejected U Nu's plan by saying "the future of the opposition would be decided by masses of the people". Ex-Brigadier General Aung Gyi, another influential politician at the time of the 8888 crisis and the first chairman in the history of the NLD, followed the suit and rejected the plan after Aung San Suu Kyi's refusal. Aung Gyi later accused several NLD members of being communists and resigned from the party. 1990 general election and Nobel Peace Prize In 1990, the military junta called a general election, in which the National League for Democracy (NLD) received 59% of the votes, guaranteeing NLD 80% of the parliament seats. Some claim that Aung San Suu Kyi would have assumed the office of Prime Minister. Instead, the results were nullified and the military refused to hand over power, resulting in an international outcry. Aung San Suu Kyi was placed under house arrest at her home on University Avenue () in Rangoon, during which time she was awarded the Sakharov Prize for Freedom of Thought in 1990, and the Nobel Peace Prize one year later. Her sons Alexander and Kim accepted the Nobel Peace Prize on her behalf. Aung San Suu Kyi used the Nobel Peace Prize's US$1.3 million prize money to establish a health and education trust for the Burmese people. Around this time, Aung San Suu Kyi chose nonviolence as an expedient political tactic, stating in 2007, "I do not hold to nonviolence for moral reasons, but for political and practical reasons." The decision of the Nobel Committee mentions: In 1995 Aung San Suu Kyi delivered the keynote address at the Fourth World Conference on Women in Beijing. 1996 attack On 9 November 1996, the motorcade that Aung San Suu Kyi was traveling in with other National League for Democracy leaders Tin Oo and Kyi Maung, was attacked in Yangon. About 200 men swooped down on the motorcade, wielding metal chains, metal batons, stones and other weapons. The car that Aung San Suu Kyi was in had its rear window smashed, and the car with Tin Oo and Kyi Maung had its rear window and two backdoor windows shattered. It is believed the offenders were members of the Union Solidarity and Development Association (USDA) who were allegedly paid Ks.500/- (@ USD $0.50) each to participate. The NLD lodged an official complaint with the police, and according to reports the government launched an investigation, but no action was taken. (Amnesty International 120297) House arrest Aung San Suu Kyi was placed under house arrest for a total of 15 years over a 21-year period, on numerous occasions, since she began her political career, during which time she was prevented from meeting her party supporters and international visitors. In an interview, she said that while under house arrest she spent her time reading philosophy, politics and biographies that her husband had sent her. She also passed the time playing the piano and was occasionally allowed visits from foreign diplomats as well as from her personal physician. Although under house arrest, Aung San Suu Kyi was granted permission to leave Burma under the condition that she never return, which she refused: "As a mother, the greater sacrifice was giving up my sons, but I was always aware of the fact that others had given up more than me. I never forget that my colleagues who are in prison suffer not only physically, but mentally for their families who have no security outside – in the larger prison of Burma under authoritarian rule." The media were also prevented from visiting Aung San Suu Kyi, as occurred in 1998 when journalist Maurizio Giuliano, after photographing her, was stopped by customs officials who then confiscated all his films, tapes and some notes. In contrast, Aung San Suu Kyi did have visits from government representatives, such as during her autumn 1994 house arrest when she met the leader of Burma, General Than Shwe and General Khin Nyunt on 20 September in the first meeting since she had been placed in detention. On several occasions during her house arrest, she had periods of poor health and as a result was hospitalized. The Burmese government detained and kept Aung San Suu Kyi imprisoned because it viewed her as someone "likely to undermine the community peace and stability" of the country, and used both Article 10(a) and 10(b) of the 1975 State Protection Act (granting the government the power to imprison people for up to five years without a trial), and Section 22 of the "Law to Safeguard the State Against the Dangers of Those Desiring to Cause Subversive Acts" as legal tools against her. She continuously appealed her detention, and many nations and figures continued to call for her release and that of 2,100 other political prisoners in the country. On 12 November 2010, days after the junta-backed Union Solidarity and Development Party (USDP) won elections conducted after a gap of 20 years, the junta finally agreed to sign orders allowing Aung San Suu Kyi's release, and her house arrest term came to an end on 13 November 2010. United Nations involvement The United Nations (UN) has attempted to facilitate dialogue between the junta and Aung San Suu Kyi. On 6 May 2002, following secret confidence-building negotiations led by the UN, the government released her; a government spokesman said that she was free to move "because we are confident that we can trust each other". Aung San Suu Kyi proclaimed "a new dawn for the country". However, on 30 May 2003 in an incident similar to the 1996 attack on her, a government-sponsored mob attacked her caravan in the northern village of Depayin, murdering and wounding many of her supporters. Aung San Suu Kyi fled the scene with the help of her driver, Kyaw Soe Lin, but was arrested upon reaching Ye-U. The government imprisoned her at Insein Prison in Rangoon. After she underwent a hysterectomy in September 2003, the government again placed her under house arrest in Rangoon. The results from the UN facilitation have been mixed; Razali Ismail, UN special envoy to Burma, met with Aung San Suu Kyi. Ismail resigned from his post the following year, partly because he was denied re-entry to Burma on several occasions. Several years later in 2006, Ibrahim Gambari, UN Undersecretary-General (USG) of Department of Political Affairs, met with Aung San Suu Kyi, the first visit by a foreign official since 2004. He also met with her later the same year. On 2 October 2007 Gambari returned to talk to her again after seeing Than Shwe and other members of the senior leadership in Naypyidaw. State television broadcast Aung San Suu Kyi with Gambari, stating that they had met twice. This was Aung San Suu Kyi's first appearance in state media in the four years since her current detention began. The United Nations Working Group for Arbitrary Detention published an Opinion that Aung San Suu Kyi's deprivation of liberty was arbitrary and in contravention of Article 9 of the Universal Declaration of Human Rights 1948, and requested that the authorities in Burma set her free, but the authorities ignored the request at that time. The U.N. report said that according to the Burmese Government's reply, "Daw Aung San Suu Kyi has not been arrested, but has only been taken into protective custody, for her own safety", and while "it could have instituted legal action against her under the country's domestic legislation ... it has preferred to adopt a magnanimous attitude, and is providing her with protection in her own interests". Such claims were rejected by Brig-General Khin Yi, Chief of Myanmar Police Force (MPF). On 18 January 2007, the state-run paper New Light of Myanmar accused Aung San Suu Kyi of tax evasion for spending her Nobel Prize money outside the country. The accusation followed the defeat of a US-sponsored United Nations Security Council resolution condemning Burma as a threat to international security; the resolution was defeated because of strong opposition from China, which has strong ties with the military junta (China later voted against the resolution, along with Russia and South Africa). In November 2007, it was reported that Aung San Suu Kyi would meet her political allies National League for Democracy along with a government minister. The ruling junta made the official announcement on state TV and radio just hours after UN special envoy Ibrahim Gambari ended his second visit to Burma. The NLD confirmed that it had received the invitation to hold talks with Aung San Suu Kyi. However, the process delivered few concrete results. On 3 July 2009, UN Secretary-General Ban Ki-moon went to Burma to pressure the junta into releasing Aung San Suu Kyi and to institute democratic reform. However, on departing from Burma, Ban Ki-moon said he was "disappointed" with the visit after junta leader Than Shwe refused permission for him to visit Aung San Suu Kyi, citing her ongoing trial. Ban said he was "deeply disappointed that they have missed a very important opportunity". Periods under detention 20 July 1989: Placed under house arrest in Rangoon under martial law that allows for detention without charge or trial for three years. 10 July 1995: Released from house arrest. 23 September 2000: Placed under house arrest. 6 May 2002: Released after 19 months. 30 May 2003: Arrested following the Depayin massacre, she was held in secret detention for more than three months before being returned to house arrest. 25 May 2007: House arrest extended by one year despite a direct appeal from U.N. Secretary-General Kofi Annan to General Than Shwe. 24 October 2007: Reached 12 years under house arrest, solidarity protests held at 12 cities around the world. 27 May 2008: House arrest extended for another year, which is illegal under both international law and Burma's own law. 11 August 2009: House arrest extended for 18 more months because of "violation" arising from the May 2009 trespass incident. 13 November 2010: Released from house arrest. 2007 anti-government protests Protests led by Buddhist monks began on 19 August 2007 following steep fuel price increases, and continued each day, despite the threat of a crackdown by the military. On 22 September 2007, although still under house arrest, Aung San Suu Kyi made a brief public appearance at the gate of her residence in Yangon to accept the blessings of Buddhist monks who were marching in support of human rights. It was reported that she had been moved the following day to Insein Prison (where she had been detained in 2003), but meetings with UN envoy Ibrahim Gambari near her Rangoon home on 30 September and 2 October established that she remained under house arrest. 2009 trespass incident On 3 May 2009, an American man, identified as John Yettaw, swam across Inya Lake to her house uninvited and was arrested when he made his return trip three days later. He had attempted to make a similar trip two years earlier, but for unknown reasons was turned away. He later claimed at trial that he was motivated by a divine vision requiring him to notify her of an impending terrorist assassination attempt. On 13 May, Aung San Suu Kyi was arrested for violating the terms of her house arrest because the swimmer, who pleaded exhaustion, was allowed to stay in her house for two days before he attempted the swim back. Aung San Suu Kyi was later taken to Insein Prison, where she could have faced up to five years' confinement for the intrusion. The trial of Aung San Suu Kyi and her two maids began on 18 May and a small number of protesters gathered outside. Diplomats and journalists were barred from attending the trial; however, on one occasion, several diplomats from Russia, Thailand and Singapore and journalists were allowed to meet Aung San Suu Kyi. The prosecution had originally planned to call 22 witnesses. It also accused John Yettaw of embarrassing the country. During the ongoing defence case, Aung San Suu Kyi said she was innocent. The defence was allowed to call only one witness (out of four), while the prosecution was permitted to call 14 witnesses. The court rejected two character witnesses, NLD members Tin Oo and Win Tin, and permitted the defence to call only a legal expert. According to one unconfirmed report, the junta was planning to, once again, place her in detention, this time in a military base outside the city. In a separate trial, Yettaw said he swam to Aung San Suu Kyi's house to warn her that her life was "in danger". The national police chief later confirmed that Yettaw was the "main culprit" in the case filed against Aung San Suu Kyi. According to aides, Aung San Suu Kyi spent her 64th birthday in jail sharing biryani rice and chocolate cake with her guards. Her arrest and subsequent trial received worldwide condemnation by the UN Secretary General Ban Ki-moon, the United Nations Security Council, Western governments, South Africa, Japan and the Association of Southeast Asian Nations, of which Burma is a member. The Burmese government strongly condemned the statement, as it created an "unsound tradition" and criticised Thailand for meddling in its internal affairs. The Burmese Foreign Minister Nyan Win was quoted in the state-run newspaper New Light of Myanmar as saying that the incident "was trumped up to intensify international pressure on Burma by internal and external anti-government elements who do not wish to see the positive changes in those countries' policies toward Burma". Ban responded to an international campaign by flying to Burma to negotiate, but Than Shwe rejected all of his requests. On 11 August 2009, the trial concluded with Aung San Suu Kyi being sentenced to imprisonment for three years with hard labour. This sentence was commuted by the military rulers to further house arrest of 18 months. On 14 August, US Senator Jim Webb visited Burma, visiting with junta leader Gen. Than Shwe and later with Aung San Suu Kyi. During the visit, Webb negotiated Yettaw's release and deportation from Burma. Following the verdict of the trial, lawyers of Aung San Suu Kyi said they would appeal against the 18-month sentence. On 18 August, United States President Barack Obama asked the country's military leadership to set free all political prisoners, including Aung San Suu Kyi. In her appeal, Aung San Suu Kyi had argued that the conviction was unwarranted. However, her appeal against the August sentence was rejected by a Burmese court on 2 October 2009. Although the court accepted the argument that the 1974 constitution, under which she had been charged, was null and void, it also said the provisions of the 1975 security law, under which she has been kept under house arrest, remained in force. The verdict effectively meant that she would be unable to participate in the elections scheduled to take place in 2010—the first in Burma in two decades. Her lawyer stated that her legal team would pursue a new appeal within 60 days. Late 2000s: International support for release Aung San Suu Kyi has received vocal support from Western nations in Europe, Australia and North and South America, as well as India, Israel, Japan the Philippines and South Korea. In December 2007, the US House of Representatives voted unanimously 400–0 to award Aung San Suu Kyi the Congressional Gold Medal; the Senate concurred on 25 April 2008. On 6 May 2008, President George W. Bush signed legislation awarding Aung San Suu Kyi the Congressional Gold Medal. She is the first recipient in American history to receive the prize while imprisoned. More recently, there has been growing criticism of her detention by Burma's neighbours in the Association of Southeast Asian Nations, particularly from Indonesia, Thailand, the Philippines and Singapore. At one point Malaysia warned Burma that it faced expulsion from ASEAN as a result of the detention of Aung San Suu Kyi. Other nations including South Africa, Bangladesh and the Maldives also called for her release. The United Nations has urged the country to move towards inclusive national reconciliation, the restoration of democracy, and full respect for human rights. In December 2008, the United Nations General Assembly passed a resolution condemning the human rights situation in Burma and calling for Aung San Suu Kyi's release—80 countries voting for the resolution, 25 against and 45 abstentions. Other nations, such as China and Russia, are less critical of the regime and prefer to cooperate only on economic matters. Indonesia has urged China to push Burma for reforms. However, Samak Sundaravej, former Prime Minister of Thailand, criticised the amount of support for Aung San Suu Kyi, saying that "Europe uses Aung San Suu Kyi as a tool. If it's not related to Aung San Suu Kyi, you can have deeper discussions with Myanmar." Vietnam, however, did not support calls by other ASEAN member states for Myanmar to free Aung San Suu Kyi, state media reported Friday, 14 August 2009. The state-run Việt Nam News said Vietnam had no criticism of Myanmar's decision 11 August 2009 to place Aung San Suu Kyi under house arrest for the next 18 months, effectively barring her from elections scheduled for 2010. "It is our view that the Aung San Suu Kyi trial is an internal affair of Myanmar", Vietnamese government spokesman Le Dung stated on the website of the Ministry of Foreign Affairs. In contrast with other ASEAN member states, Dung said Vietnam has always supported Myanmar and hopes it will continue to implement the "roadmap to democracy" outlined by its government. Nobel Peace Prize winners (Archbishop Desmond Tutu, the Dalai Lama, Shirin Ebadi, Adolfo Pérez Esquivel, Mairead Corrigan, Rigoberta Menchú, Prof. Elie Wiesel, US President Barack Obama, Betty Williams, Jody Williams and former US President Jimmy Carter) called for the rulers of Burma to release Aung San Suu Kyi to "create the necessary conditions for a genuine dialogue with Daw Aung San Suu Kyi and all concerned parties and ethnic groups to achieve an inclusive national reconciliation with the direct support of the United Nations". Some of the money she received as part of the award helped fund higher education grants to Burmese students through the London-based charity Prospect Burma. It was announced prior to the 2010 Burmese general election that Aung San Suu Kyi may be released "so she can organize her party", However, Aung San Suu Kyi was not allowed to run. On 1 October 2010 the government announced that she would be released on 13 November 2010. US President Barack Obama personally advocated the release of all political prisoners, especially Aung San Suu Kyi, during the US-ASEAN Summit of 2009. The US Government hoped that successful general elections would be an optimistic indicator of the Burmese government's sincerity towards eventual democracy. The Hatoyama government which spent 2.82 billion yen in 2008, has promised more Japanese foreign aid to encourage Burma to release Aung San Suu Kyi in time for the elections; and to continue moving towards democracy and the rule of law. In a personal letter to Aung San Suu Kyi, UK Prime Minister Gordon Brown cautioned the Burmese government of the potential consequences of rigging elections as "condemning Burma to more years of diplomatic isolation and economic stagnation". Aung San Suu Kyi met with many heads of state and opened a dialog with the Minister of Labor Aung Kyi (not to be confused with Aung San Suu Kyi). She was allowed to meet with senior members of her NLD party at the State House, however these meetings took place under close supervision. 2010 release On the evening of 13 November 2010, Aung San Suu Kyi was released from house arrest. This was the date her detention had been set to expire according to a court ruling in August 2009 and came six days after a widely criticised general election. She appeared in front of a crowd of her supporters, who rushed to her house in Rangoon when nearby barricades were removed by the security forces. Aung San Suu Kyi had been detained for 15 of the past 21 years. The government newspaper New Light of Myanmar reported the release positively, saying she had been granted a pardon after serving her sentence "in good conduct". The New York Times suggested that the military government may have released Aung San Suu Kyi because it felt it was in a confident position to control her supporters after the election. Her son Kim Aris was granted a visa in November 2010 to see his mother shortly after her release, for the first time in 10 years. He visited again on 5 July 2011, to accompany her on a trip to Bagan, her first trip outside Yangon since 2003. Her son visited again on 8 August 2011, to accompany her on a trip to Pegu, her second trip. Discussions were held between Aung San Suu Kyi and the Burmese government during 2011, which led to a number of official gestures to meet her demands. In October, around a tenth of Burma's political prisoners were freed in an amnesty and trade unions were legalised. In November 2011, following a meeting of its leaders, the NLD announced its intention to re-register as a political party to contend 48 by-elections necessitated by the promotion of parliamentarians to ministerial rank. Following the decision, Aung San Suu Kyi held a telephone conference with US President Barack Obama, in which it was agreed that Secretary of State Hillary Clinton would make a visit to Burma, a move received with caution by Burma's ally China. On 1 December 2011, Aung San Suu Kyi met with Hillary Clinton at the residence of the top-ranking US diplomat in Yangon. On 21 December 2011, Thai Prime Minister Yingluck Shinawatra met Aung San Suu Kyi in Yangoon, marking Aung San Suu Kyi's "first-ever meeting with the leader of a foreign country". On 5 January 2012, British Foreign Minister William Hague met Aung San Suu Kyi and his Burmese counterpart. This represented a significant visit for Aung San Suu Kyi and Burma. Aung San Suu Kyi studied in the UK and maintains many ties there, whilst Britain is Burma's largest bilateral donor. During Aung San Suu Kyi's visit to Europe, she visited the Swiss parliament, collected her 1991 Nobel Prize in Oslo and her honorary degree from the University of Oxford. 2012 by-elections In December 2011, there was speculation that Aung San Suu Kyi would run in the 2012 national by-elections to fill vacant seats. On 18 January 2012, Aung San Suu Kyi formally registered to contest a Pyithu Hluttaw (lower house) seat in the Kawhmu Township constituency in special parliamentary elections to be held on 1 April 2012. The seat was previously held by Soe Tint, who vacated it after being appointed Construction Deputy Minister, in the 2010 election. She ran against Union Solidarity and Development Party candidate Soe Min, a retired army physician and native of Twante Township. On 3 March 2012, at a large campaign rally in Mandalay, Aung San Suu Kyi unexpectedly left after 15 minutes, because of exhaustion and airsickness. In an official campaign speech broadcast on Burmese state television's MRTV on 14 March 2012, Aung San Suu Kyi publicly campaigned for reform of the 2008 Constitution, removal of restrictive laws, more adequate protections for people's democratic rights, and establishment of an independent judiciary. The speech was leaked online a day before it was broadcast. A paragraph in the speech, focusing on the Tatmadaw's repression by means of law, was censored by authorities. Aung San Suu Kyi also called for international media to monitor the by-elections, while publicly pointing out irregularities in official voter lists, which include deceased individuals and exclude other eligible voters in the contested constituencies. On 21 March 2012, Aung San Suu Kyi was quoted as saying "Fraud and rule violations are continuing and we can even say they are increasing." When asked whether she would assume a ministerial post if given the opportunity, she said the following: On 26 March 2012, Aung San Suu Kyi suspended her nationwide campaign tour early, after a campaign rally in Myeik (Mergui), a coastal town in the south, citing health problems due to exhaustion and hot weather. On 1 April 2012, the NLD announced that Aung San Suu Kyi had won the vote for a seat in Parliament. A news broadcast on state-run MRTV, reading the announcements of the Union Election Commission, confirmed her victory, as well as her party's victory in 43 of the 45 contested seats, officially making Aung San Suu Kyi the Leader of the Opposition in the Pyidaungsu Hluttaw. Although she and other MP-elects were expected to take office on 23 April when the Hluttaws resumed session, National League for Democracy MP-elects, including Aung San Suu Kyi, said they might not take their oaths because of its wording; in its present form, parliamentarians must vow to "safeguard" the constitution. In an address on Radio Free Asia, she said "We don't mean we will not attend the parliament, we mean we will attend only after taking the oath ... Changing that wording in the oath is also in conformity with the Constitution. I don't expect there will be any difficulty in doing it." On 2 May 2012, National League for Democracy MP-elects, including Aung San Suu Kyi, took their oaths and took office, though the wording of the oath was not changed. According to the Los Angeles Times, "Suu Kyi and her colleagues decided they could do more by joining as lawmakers than maintaining their boycott on principle." On 9 July 2012, she attended the Parliament for the first time as a lawmaker. 2015 general election On 16 June 2012, Aung San Suu Kyi was finally able to deliver her Nobel acceptance speech (Nobel lecture) at Oslo's City Hall, two decades after being awarded the peace prize. In September 2012, Aung San Suu Kyi received in person the United States Congressional Gold Medal, which is the highest Congressional award. Although she was awarded this medal in 2008, at the time she was under house arrest, and was unable to receive the medal. Aung San Suu Kyi was greeted with bipartisan support at Congress, as part of a coast-to-coast tour in the United States. In addition, Aung San Suu Kyi met President Barack Obama at the White House. The experience was described by Aung San Suu Kyi as "one of the most moving days of my life". In 2014, she was listed as the 61st-most-powerful woman in the world by Forbes. On 6 July 2012, Aung San Suu Kyi announced on the World Economic Forum's website that she wanted to run for the presidency in Myanmar's 2015 elections. The current Constitution, which came into effect in 2008, bars her from the presidency because she is the widow and mother of foreigners—provisions that appeared to be written specifically to prevent her from being eligible. The NLD won a sweeping victory in those elections, winning at least 255 seats in the House of Representatives and 135 seats in the House of Nationalities. In addition, Aung San Suu Kyi won re-election to the House of Representatives. Under the 2008 constitution, the NLD needed to win at least a two-thirds majority in both houses to ensure that its candidate would become president. Before the elections, Aung San Suu Kyi announced that even though she is constitutionally barred from the presidency, she would hold the real power in any NLD-led government. On 30 March 2016 she became Minister for the President's Office, for Foreign Affairs, for Education and for Electric Power and Energy in President Htin Kyaw's government; later she relinquished the latter two ministries and President Htin Kyaw appointed her State Counsellor, a position akin to a Prime Minister created especially for her. The position of State Counsellor was approved by the House of Nationalities on 1 April 2016 and the House of Representatives on 5 April 2016. The next day, her role as State Counsellor was established. State counsellor and foreign minister (2016–2021) As soon as she became foreign minister, she invited Chinese Foreign Minister Wang Yi, Canadian Foreign Minister Stephane Dion and Italian Foreign Minister Paolo Gentiloni in April and Japanese Foreign Minister Fumio Kishida in May and discussed how to have good diplomatic relationships with these countries. Initially, upon accepting the State Counsellor position, she granted amnesty to the students who were arrested for opposing the National Education Bill, and announced the creation of the commission on Rakhine State, which had a long record of persecution of the Muslim Rohingya minority. However, soon Aung San Suu Kyi's government did not manage with the ethnic conflicts in Shan and Kachin states, where thousands of refugees fled to China, and by 2017 the persecution of the Rohingya by the government forces escalated to the point that it is not uncommonly called a genocide. Aung San Suu Kyi, when interviewed, has denied the allegations of ethnic cleansing. She has also refused to grant citizenship to the Rohingya, instead taking steps to issue ID cards for residency but no guarantees of citizenship. Her tenure as State Counsellor of Myanmar has drawn international criticism for her failure to address her country's economic and ethnic problems, particularly the plight of the Rohingya following the 25 August 2017 ARSA attacks (described as "certainly one of the biggest refugee crises and cases of ethnic cleansing since the Second World War"), for the weakening of freedom of the press and for her style of leadership, described as imperious and "distracted and out of touch". During the COVID-19 pandemic in Myanmar, Suu Kyi chaired a National Central Committee responsible for coordinating the country's pandemic response. Response to the genocide of Rohingya Muslims and refugees In 2017, critics called for Aung San Suu Kyi's Nobel prize to be revoked, citing her silence over the genocide of Rohingya people in Myanmar. Some activists criticised Aung San Suu Kyi for her silence on the 2012 Rakhine State riots (later repeated during the 2015 Rohingya refugee crisis), and her indifference to the plight of the Rohingya, Myanmar's persecuted Muslim minority. In 2012, she told reporters she did not know if the Rohingya could be regarded as Burmese citizens. In a 2013 interview with the BBC's Mishal Husain, Aung San Suu Kyi did not condemn violence against the Rohingya and denied that Muslims in Myanmar have been subject to ethnic cleansing, insisting that the tensions were due to a "climate of fear" caused by "a worldwide perception that global Muslim power is 'very great. She did condemn "hate of any kind" in the interview. According to Peter Popham, in the aftermath of the interview, she expressed anger at being interviewed by a Muslim. Husain had challenged Aung San Suu Kyi that almost all of the impact of violence was against the Rohingya, in response to Aung San Suu Kyi's claim that violence was happening on both sides, and Peter Popham described her position on the issue as one of purposeful ambiguity for political gain. However, she said that she wanted to work towards reconciliation and she cannot take sides as violence has been committed by both sides. According to The Economist, her "halo has even slipped among foreign human-rights lobbyists, disappointed at her failure to make a clear stand on behalf of the Rohingya minority". However, she has spoken out "against a ban on Rohingya families near the Bangladeshi border having more than two children". In a 2015 BBC News article, reporter Jonah Fisher suggested that Aung San Suu Kyi's silence over the Rohingya issue is due to a need to obtain support from the majority Bamar ethnicity as she is in "the middle of a general election campaign". In May 2015, the Dalai Lama publicly called upon her to do more to help the Rohingya in Myanmar, claiming that he had previously urged her to address the plight of the Rohingya in private during two separate meetings and that she had resisted his urging. In May 2016, Aung San Suu Kyi asked the newly appointed United States Ambassador to Myanmar, Scot Marciel, not to refer to the Rohingya by that name as they "are not recognized as among the 135 official ethnic groups" in Myanmar. This followed Bamar protests at Marciel's use of the word "Rohingya". In 2016, Aung San Suu Kyi was accused of failing to protect Myanmar's Rohingya Muslims during the Rohingya genocide. State crime experts from Queen Mary University of London warned that Aung San Suu Kyi is "legitimising genocide" in Myanmar. Despite continued persecution of the Rohingya well into 2017, Aung San Suu Kyi was "not even admitting, let alone trying to stop, the army's well-documented campaign of rape, murder and destruction against Rohingya villages". On 4 September 2017, Yanghee Lee, the UN's special rapporteur on human rights in Myanmar, criticised Aung San Suu Kyi's response to the "really grave" situation in Rakhine, saying: "The de facto leader needs to step in—that is what we would expect from any government, to protect everybody within their own jurisdiction." The BBC reported that "Her comments came as the number of Rohingya fleeing to Bangladesh reached 87,000, according to UN estimates", adding that "her sentiments were echoed by Nobel Peace laureate Malala Yousafzai, who said she was waiting to hear from Ms Suu Kyi—who has not commented on the crisis since it erupted". The next day George Monbiot, writing in The Guardian, called on readers to sign a change.org petition to have the Nobel peace prize revoked, criticising her silence on the matter and asserting "whether out of prejudice or out of fear, she denies to others the freedoms she rightly claimed for herself. Her regime excludes—and in some cases seeks to silence—the very activists who helped to ensure her own rights were recognised." The Nobel Foundation replied that there existed no provision for revoking a Nobel Prize. Archbishop Desmond Tutu, a fellow peace prize holder, also criticised Aung San Suu Kyi's silence: in an open letter published on social media, he said: "If the political price of your ascension to the highest office in Myanmar is your silence, the price is surely too steep ... It is incongruous for a symbol of righteousness to lead such a country." On 13 September it was revealed that Aung San Suu Kyi would not be attending a UN General Assembly debate being held the following week to discuss the humanitarian crisis, with a Myanmar government spokesman stating "perhaps she has more pressing matters to deal with". In October 2017, Oxford City Council announced that, following a unanimous cross-party vote, the honour of Freedom of the City, granted in 1997 in recognition of her "long struggle for democracy", was to be withdrawn following evidence emerging from the United Nations which meant that she was "no longer worthy of the honour". A few days later, Munsur Ali, a councillor for City of London Corporation, tabled a motion to rescind the Freedom of the City of London: the motion was supported by Catherine McGuinness, chair of the corporation's policy and resources committee, who expressed "distress ... at the situation in Burma and the atrocities committed by the Burmese military". On 13 November 2017, Bob Geldof returned his Freedom of the City of Dublin award in protest over Aung San Suu Kyi also holding the accolade, stating that he does not "wish to be associated in any way with an individual currently engaged in the mass ethnic cleansing of the Rohingya people of north-west Burma". Calling Aung San Suu Kyi a "handmaiden to genocide", Geldof added that he would take pride in his award being restored if it is first stripped from her. The Dublin City Council voted 59–2 (with one abstention) to revoke Aung San Suu Kyi's Freedom of the City award over Myanmar's treatment of the Rohingya people in December 2017, though Lord Mayor of Dublin Mícheál Mac Donncha denied the decision was influenced by protests by Geldof and members of U2. At the same meeting, the Councillors voted 37–7 (with 5 abstentions) to remove Geldof's name from the Roll of Honorary Freemen. In March 2018, the United States Holocaust Memorial Museum revoked Aung San Suu Kyi's Elie Wiesel Award, awarded in 2012, citing her failure "to condemn and stop the military's brutal campaign" against Rohingya Muslims. In May 2018, Aung San Suu Kyi was considered complicit in the crimes against Rohingyas in a report by Britain's International Development Committee. In August 2018, it was revealed that Aung San Suu Kyi would be stripped of her Freedom of Edinburgh award over her refusal to speak out against the crimes committed against the Rohingya. She had received the award in 2005 for promoting peace and democracy in Burma. This will be only the second time that anyone has ever been stripped of the award, after Charles Stewart Parnell lost it in 1890 due to a salacious affair. Also in August, a UN report, while describing the violence as genocide, added that Aung San Suu Kyi did as little as possible to prevent it. In early October 2018, both the Canadian Senate and its House of Commons voted unanimously to strip Aung San Suu Kyi of her honorary citizenship. This decision was caused by the Government of Canada's determination that the treatment of the Rohingya by Myanmar's government amounts to genocide. On 11 November 2018, Amnesty International announced it was revoking her Ambassador of Conscience award. In December 2019, Aung San Suu Kyi appeared in the International Court of Justice at The Hague where she defended the Burmese military against allegations of genocide against the Rohingya. In a speech of over 3,000 words, Aung San Suu Kyi did not use the term "Rohingya" in describing the ethnic group. She stated that the allegations of genocide were "incomplete and misleading", claiming that the situation was actually a Burmese military response to attacks by the Arakan Rohingya Salvation Army. She also questioned how there could be "genocidal intent" when the Burmese government had opened investigations and also encouraged Rohingya to return after being displaced. However, experts have largely criticized the Burmese investigations as insincere, with the military declaring itself innocent and the government preventing a visit from investigators from the United Nations. Many Rohingya have also not returned due to perceiving danger and a lack of rights in Myanmar. In January 2020, the International Court of Justice decided that there was a "real and imminent risk of irreparable prejudice to the rights" of the Rohingya. The court also took the view that the Burmese government's efforts to remedy the situation "do not appear sufficient" to protect the Rohingya. Therefore, the court ordered the Burmese government to take "all measures within its power" to protect the Rohingya from genocidal actions. The court also instructed the Burmese government to preserve evidence and report back to the court at timely intervals about the situation. Arrests and prosecution of journalists In December 2017, two Reuters journalists, Wa Lone and Kyaw Soe Oo, were arrested while investigating the Inn Din massacre of Rohingyas. Suu Kyi publicly commented in June 2018 that the journalists "weren't arrested for covering the Rakhine issue", but because they had broken Myanmar's Official Secrets Act. As the journalists were then on trial for violating the Official Secrets Act, Aung San Suu Kyi's presumption of their guilt was criticized by rights groups for potentially influencing the verdict. American diplomat Bill Richardson said that he had privately discussed the arrest with Suu Kyi, and that Aung San Suu Kyi reacted angrily and labelled the journalists "traitors". A police officer testified that he was ordered by superiors to use entrapment to frame and arrest the journalists; he was later jailed and his family evicted from their home in the police camp. The judge found the journalists guilty in September 2018 and to be jailed for seven years. Aung San Suu Kyi reacted to widespread international criticism of the verdict by stating: "I don't think anyone has bothered to read" the judgement as it had "nothing to do with freedom of expression at all", but the Official Secrets Act. She also challenged critics to "point out where there has been a miscarriage of justice", and told the two Reuters journalists that they could appeal their case to a higher court. In September 2018, the Office of the United Nations High Commissioner for Human Rights issued a report that since Aung San Suu Kyi's party, the NLD, came to power, the arrests and criminal prosecutions of journalists in Myanmar by the government and military, under laws which are too vague and broad, have "made it impossible for journalists to do their job without fear or favour." 2021 arrest and trial On 1 February 2021, Aung San Suu Kyi was arrested and deposed by the Myanmar military, along with other leaders of her National League for Democracy (NLD) party, after the Myanmar military declared the November 2020 general election results fraudulent. A 1 February court order authorized her detainment for 15 days, stating that soldiers searching her Naypyidaw villa had uncovered imported communications equipment lacking proper paperwork. Aung San Suu Kyi was transferred to house arrest on the same evening, and on 3 February was formally charged with illegally importing ten or more walkie-talkies. She faces up to three years in prison for the charges. According to The New York Times, the charge "echoed previous accusations of esoteric legal crimes (and) arcane offenses" used by the military against critics and rivals. As of 9 February, Aung San Suu Suu Kyi continues to be held incommunicado, without access to international observers or legal representation of her choice. US President Joe Biden raised the threat of new sanctions as a result of the Myanmar military coup. In a statement, the UN Secretary-General António Guterres believes "These developments represent a serious blow to democratic reforms in Myanmar." Volkan Bozkir, President of the UN General Assembly, also voiced his concerns, having tweeted "Attempts to undermine democracy and rule of law are unacceptable", and called for the "immediate release" of the detained NLD party leaders. On 1 April 2021, Aung San Suu Kyi was charged with the fifth offence in relation to violating the official secrets act. According to her lawyer, it is the most serious charge brought against her after the coup and could carry a sentence of up to 14 years in prison if convicted. On 12 April 2021, Aung San Suu Kyi was hit with another charge, this time "under section 25 of the natural disaster management law". According to her lawyer, it is her sixth indictment. She appeared in court via video link and now faces five charges in the capital Naypyidaw and one in Yangon. On 28 April 2021, the National Unity Government (NUG), in which Aung San Suu Kyi symbolically retained her position, anticipated that there would be no talks with the junta until all political prisoners, including her, are set free. This move by her supporters come after an ASEAN-supported consensus with the junta leadership in the past days. However, on 8 May 2021, the junta designated NUG as a terrorist organization and warned citizens not to cooperate, nor to give aid to the parallel government, stripping Aung San Suu Kyi of her symbolic position. On 10 May 2021, her lawyer said she would appear in court in person for the first time since her arrest after the Supreme Court ruled that she could attend in person and meet her lawyers. She had been previously only allowed to do so remotely from her home. On 21 May 2021, a military junta commission was formed to dissolve Aung San Suu Kyi's National League for Democracy (NLD) on grounds of election fraud in the November 2020 election. On 22 May 2021, during his first interview since the coup, junta leader Min Aung Hlaing reported that she was in good health at her home and that she would appear in court in a matter of days. On 23 May 2021, the European Union expressed support for Aung San Suu Kyi's party and condemned the commission aimed at dissolving the party, echoing the NLD's statement released earlier in the week. On 24 May 2021, Aung San Suu Kyi appeared in person in court for the first time since the coup to face the "incitement to sedition" charge against her. During the 30-minute hearing, she said that she was not fully aware of what was going on outside as she had no access to full information from the outside and refused to respond on the matters. She was also quoted on the possibility of her party’s forced dissolution as "Our party grew out of the people so it will exist as long as people support it." In her meeting with her lawyers, Aung San Suu Kyi also wished people "good health". On 2 June 2021, it was reported that the military had moved her (as well as Win Myint) from their homes to an unknown location. On 10 June 2021, Aung San Suu Kyi was charged with corruption, the most serious charge brought against her, which carries a maximum penalty of 15 years' imprisonment. Aung San Suu Kyi's lawyers say the charges are made to keep her out of the public eye. On 14 June 2021, the trial against Aung San Suu Kyi began. Any conviction would prevent her from running for office again. Aung San Suu Kyi's lawyers attempted to have prosecution testimony against her on the sedition charge disqualified but the motion was denied by the judge. On 13 September 2021, court proceedings were to resume against her, but it was postponed due to Aung San Suu Kyi presenting "minor health issues" that impeded her from attending the court in person. On 4 October 2021, Aung San Suu Kyi asked the judge to reduce her times of court appearances because of her fragile health. Aung San Suu Kyi described her health as "strained". In November, the Myanmar courts deferred the first verdicts in the trial without further explanation or giving dates. In the same month, she was again charged with corruption, related to the purchase and rental of a helicopter, bringing the total of charges to nearly a dozen. On 6 December 2021, Suu Kyi was sentenced to 4 years in jail. Suu Kyi, who is still facing multiple charges and further sentences, was sentenced on the charge of inciting dissent and violating COVID-19 protocols. Following a partial pardon by the chief of the military government, Aung San Suu Kyi's four-year sentence was reduced to two years' imprisonment. On 10 January 2022, the military court in Myanmar sentenced Suu Kyi to an additional four years in prison on a number of charges including "importing and owning walkie-talkies" and "breaking coronavirus rules". The trials, which are closed to the public, the media, and any observers, were described as a "courtroom circus of secret proceedings on bogus charges" by the deputy director for Asia of Human Rights Watch. On 27 April 2022, Aung San Suu Kyi was sentenced to five years in jail on corruption charges. On 22 June 2022, junta authorities ordered that all further legal proceedings against Suu Kyi will take place in prison venues, instead of a courtroom. No explanation of the decision was given. Citing unidentified sources, the BBC reported that Suu Kyi was also moved on 22 June from house arrest, where she had had close companions, to solitary confinement in a specially-built area inside a prison in Nay Pyi Taw. This is the same prison in which Win Myint had similarly been placed in solitary confinement. The military confirmed that Suu Kyi had been moved to prison. On 15 August 2022, sources following Aung San Suu Kyi's court proceedings said that she was sentenced to an additional six years' imprisonment after being found guilty on four corruption charges, bringing her overall sentences to 17 years in prison. In September 2022, she was convicted of election fraud and breaching the state's secrets act and sentenced to a total of six years in prison for both convictions, increasing her overall sentence to 23 years in prison. By 12 October 2022, she had been sentenced to 26 years imprisonment on ten charges in total, including five corruption charges. On 30 December 2022, her trials ended with another conviction and an additional sentence of seven years' imprisonment for corruption. Aung San Suu Kyi's final sentence is of 33 years in prison. On 12 July 2023, Thailand's foreign minister Don Pramudwinai said at the ASEAN Foreign Ministers' Meeting in Jakarta that he met with Aung San Suu Kyi during his visit to Myanmar. On 1 August 2023, the military junta granted Suu Kyi a partial pardon, reducing her sentence to a total of 27 years in prison. Prior to the pardon, she was moved from prison to a VIP government residence, according to an official from NLD party. However, it was reported that since the beginning of September 2023, she is back in prison. The exact time when she was sent back to prison is unknown. Since January, Aung San Suu Kyi and her lawyers are trying to get six corruption charges overturned. To this date, the requests are repeatedly denied. Political beliefs Asked what democratic models Myanmar could look to, she said: "We have many, many lessons to learn from various places, not just the Asian countries like South Korea, Taiwan, Mongolia, and Indonesia." She also cited "eastern Europe and countries, which made the transition from communist autocracy to democracy in the 1980s and 1990s, and the Latin American countries, which made the transition from military governments. And we cannot of course forget South Africa, because although it wasn't a military regime, it was certainly an authoritarian regime." She added: "We wish to learn from everybody who has achieved a transition to democracy, and also ... our great strong point is that, because we are so far behind everybody else, we can also learn which mistakes we should avoid." In a nod to the deep US political divide between Republicans led by Mitt Romney and the Democrats by Obama—then battling to win the 2012 presidential election—she stressed, "Those of you who are familiar with American politics I'm sure understand the need for negotiated compromise." Related organisations Freedom Now, a Washington, D.C.-based non-profit organisation, was retained in 2006 by a member of her family to help secure Aung San Suu Kyi's release from house arrest. The organisation secured several opinions from the UN Working Group on Arbitrary Detention that her detention was in violation of international law; engaged in political advocacy such as spearheading a letter from 112 former Presidents and Prime Ministers to UN Secretary-General Ban Ki-moon urging him to go to Burma to seek her release, which he did six weeks later; and published numerous op-eds and spoke widely to the media about her ongoing detention. Its representation of her ended when she was released from house arrest on 13 November 2010. Aung San Suu Kyi has been an honorary board member of International IDEA and ARTICLE 19 since her detention, and has received support from these organisations. The Vrije Universiteit Brussel and the University of Louvain (UCLouvain), both located in Belgium, granted her the title of Doctor Honoris Causa. In 2003, the Freedom Forum recognised Aung San Suu Kyi's efforts to promote democracy peacefully with the Al Neuharth Free Spirit of the Year Award, in which she was presented over satellite because she was under house arrest. She was awarded one million dollars. In June of each year, the U.S. Campaign for Burma organises hundreds of "Arrest Yourself" house parties around the world in support of Aung San Suu Kyi. At these parties, the organisers keep themselves under house arrest for 24 hours, invite their friends, and learn more about Burma and Aung San Suu Kyi. The Freedom Campaign, a joint effort between the Human Rights Action Center and US Campaign for Burma, looks to raise worldwide attention to the struggles of Aung San Suu Kyi and the people of Burma. The Burma Campaign UK is a UK-based NGO (Non-Governmental Organisation) that aims to raise awareness of Burma's struggles and follow the guidelines established by the NLD and Aung San Suu Kyi. St Hugh's College, Oxford, where she studied, had a Burmese theme for their annual ball in support of her in 2006. The university later awarded her an honorary doctorate in civil law on 20 June 2012 during her visit to her alma mater. Aung San Suu Kyi is the official patron of The Rafto Human Rights House in Bergen, Norway. She received the Thorolf Rafto Memorial Prize in 1990. She was made an honorary free person of the City of Dublin, Ireland in November 1999, although a space had been left on the roll of signatures to symbolize her continued detention. This was subsequently revoked on 13 December 2017. In November 2005 the human rights group Equality Now proposed Aung Sun Suu Kyi as a potential candidate, among other qualifying women, for the position of U.N. Secretary General. In the proposed list of qualified women Aung San Suu Kyi was recognised by Equality Now as the Prime Minister-Elect of Burma. The UN' special envoy to Myanmar, Ibrahim Gambari, met Aung San Suu Kyi on 10 March 2008 before wrapping up his trip to the military-ruled country. Aung San Suu Kyi was an honorary member of The Elders, a group of eminent global leaders brought together by Nelson Mandela. Her ongoing detention meant that she was unable to take an active role in the group, so The Elders placed an empty chair for her at their meetings. The Elders have consistently called for the release of all political prisoners in Burma. Upon her election to parliament, she stepped down from her post. In 2010, Aung San Suu Kyi was given an honorary doctorate from the University of Johannesburg. In 2011, Aung San Suu Kyi was named the Guest Director of the 45th Brighton Festival. She was part of the international jury of Human Rights Defenders and Personalities who helped to choose a universal Logo for Human Rights in 2011. In June 2011, the BBC announced that Aung San Suu Kyi was to deliver the 2011 Reith Lectures. The BBC covertly recorded two lectures with Aung San Suu Kyi in Burma, which were then smuggled out of the country and brought back to London. The lectures were broadcast on BBC Radio 4 and the BBC World Service on 28 June 2011 and 5 July 2011. 8 March 2012, Canadian Foreign Affairs Minister John Baird presented Aung San Suu Kyi a certificate of honorary Canadian citizenship and an informal invitation to visit Canada. The honorary citizenship was revoked in September 2018 due to the Rohingya conflict. In April 2012, British Prime Minister David Cameron became the first leader of a major world power to visit Aung San Suu Kyi and the first British prime minister to visit Burma since the 1950s. In his visit, Cameron invited Aung San Suu Kyi to Britain where she would be able to visit her 'beloved' Oxford, an invitation which she later accepted. She visited Britain on 19 June 2012. In 2012 she received the Honorary degree of Doctor of Civil Law from the University of Oxford. In May 2012, Aung San Suu Kyi received the inaugural Václav Havel Prize for Creative Dissent of the Human Rights Foundation. 29 May 2012 PM Manmohan Singh of India visited Aung San Suu Kyi. In his visit, PM invited Aung San Suu Kyi to India as well. She started her six-day visit to India on 16 November 2012, where among the places she visited was her alma mater Lady Shri Ram College in New Delhi. In 2012, Aung San Suu Kyi set up the charity Daw Khin Kyi Foundation to improve health, education and living standards in underdeveloped parts of Myanmar. The charity was named after Aung San Suu Kyi's mother. Htin Kyaw played a leadership role in the charity before his election as President of Myanmar. The charity runs a Hospitality and Catering Training Academy in Kawhmu Township, in Yangon Region, and runs a mobile library service which in 2014 had 8000 members. Seoul National University in South Korea conferred an honorary doctorate degree to Aung San Suu Kyi in February 2013. University of Bologna, Italy conferred an honorary doctorate degree in philosophy to Aung San Suu Kyi in October 2013. Monash University, The Australian National University, University of Sydney and University of Technology, Sydney conferred an honorary degree to Aung San Suu Kyi in November 2013. In popular culture The life of Aung San Suu Kyi and her husband Michael Aris is portrayed in Luc Besson's 2011 film The Lady, in which they are played by Michelle Yeoh and David Thewlis. Yeoh visited Aung San Suu Kyi in 2011 before the film's release in November. In the John Boorman's 1995 film Beyond Rangoon, Aung San Suu Kyi was played by Adelle Lutz. Irish songwriters Damien Rice and Lisa Hannigan released in 2005 the single "Unplayed Piano", in support of the Free Aung San Suu Kyi 60th Birthday Campaign that was happening at the time. U2's Bono wrote the song "Walk On" in tribute to Aung San Suu Kyi (and wore a shirt with her name and image upon it), and he publicized her plight during the U2 360° Tour, 2009–2011. Saxophonist Wayne Shorter composed a song titled "Aung San Suu Kyi". It appears on his albums 1+1 (with pianist Herbie Hancock) and Footprints Live!. Health problems Aung San Suu Kyi underwent surgery for a gynecological condition in September 2003 at Asia Royal Hospital during her house arrest. She also underwent minor foot surgery in December 2013 and eye surgery in April 2016. In June 2012, her doctor Tin Myo Win said that she had no serious health problems, but weighed only , had low blood pressure, and could become weak easily. After being arrested and detained on 1 February 2021, there were concerns that Aung San Suu Kyi's health is deteriorating. However, according to military's spokesperson Zaw Min Tun, special attention is given to her health and living condition. Don Pramudwinai also said that "she was in good health, both physically and mentally". Although a junta spokesperson claimed that she is in good health, since being sent back to prison in September 2023, it is reported that her health condition is worsening and "suffering a series of toothache and unable to eat". Her request to see a dentist had been denied. Her son is urging the junta to allow Aung San Suu Kyi to receive medical assistance. Books Freedom from Fear (1991) Letters from Burma (1991) Let's Visit Nepal (1985) (ISBN 978-0222009814) Honours List of honours of Aung San Suu Kyi See also List of civil rights leaders List of Nobel laureates affiliated with Kyoto University State Counsellor of Myanmar List of foreign ministers in 2017 List of current foreign ministers Notes References Bibliography Miller, J. E. (2001). Who's Who in Contemporary Women's Writing. Routledge. Reid, R., Grosberg, M. (2005). Myanmar (Burma). Lonely Planet. . Stewart, Whitney (1997). Aung San Suu Kyi: Fearless Voice of Burma. Twenty-First Century Books. . Further reading Combs, Daniel. Until the World Shatters: Truth, Lies, and the Looting of Myanmar (2021). Aung San Suu Kyi (Modern Peacemakers) (2007) by Judy L. Hasday, The Lady: Aung San Suu Kyi: Nobel Laureate and Burma's Prisoner (2002) by Barbara Victor, , or 1998 hardcover: The Lady and the Peacock: The Life of Aung San Suu Kyi (2012) by Peter Popham, Perfect Hostage: A Life of Aung San Suu Kyi (2007) by Justin Wintle, Tyrants: The World's 20 Worst Living Dictators (2006) by David Wallechinsky, Aung San Suu Kyi (Trailblazers of the Modern World) (2004) by William Thomas, No Logo: No Space, No Choice, No Jobs (2002) by Naomi Klein Mental culture in Burmese crisis politics: Aung San Suu Kyi and the National League for Democracy (ILCAA Study of Languages and Cultures of Asia and Africa Monograph Series) (1999) by Gustaaf Houtman, Aung San Suu Kyi: Standing Up for Democracy in Burma (Women Changing the World) (1998) by Bettina Ling Prisoner for Peace: Aung San Suu Kyi and Burma's Struggle for Democracy (Champions of Freedom Series) (1994) by John Parenteau, Des femmes prix Nobel de Marie Curie à Aung San Suu Kyi, 1903–1991 (1992) by Charlotte Kerner, Nicole Casanova, Gidske Anderson, Aung San Suu Kyi, towards a new freedom (1998) by Chin Geok Ang Aung San Suu Kyi's struggle: Its principles and strategy (1997) by Mikio Oishi Finding George Orwell in Burma (2004) by Emma Larkin Character Is Destiny: Inspiring Stories Every Young Person Should Know and Every Adult Should Remember (2005) by John McCain, Mark Salter. Random House Under the Dragon: A Journey Through Burma (1998/2010) by Rory MacLean External links Aung San Suu Kyi's website (Site appears to be inactive. Last posting was in July 2014) |- |- |- |- |- |- |- |- |- |- |- |- |- Prime Ministers of Myanmar 21st-century women prime ministers 1945 births 20th-century Burmese women writers 20th-century Burmese writers 21st-century Burmese politicians 21st-century Burmese women politicians 21st-century Burmese women writers 21st-century Burmese writers Alumni of SOAS University of London Alumni of St Hugh's College, Oxford Amnesty International prisoners of conscience held by Myanmar Buddhist pacifists Burmese activists Burmese democracy activists Burmese human rights activists Burmese Nobel laureates Burmese pacifists Burmese prisoners and detainees Burmese revolutionaries Burmese socialists Burmese Theravada Buddhists Burmese women activists Burmese women diplomats Burmese women in politics Civil rights activists Congressional Gold Medal recipients Family of Aung San Fellows of St Hugh's College, Oxford Fellows of the Royal College of Surgeons of Edinburgh Female foreign ministers Female heads of government Foreign ministers of Myanmar Gandhians Heads of government who were later imprisoned Honorary Companions of the Order of Australia International Simón Bolívar Prize recipients Lady Shri Ram College alumni Leaders ousted by a coup Living people Members of Pyithu Hluttaw National League for Democracy politicians Nobel Peace Prize laureates Nonviolence advocates Olof Palme Prize laureates Activists from Yangon Politicians from Yangon People stripped of honorary degrees Presidential Medal of Freedom recipients Prisoners and detainees of Myanmar Recipients of the Four Freedoms Award Sakharov Prize laureates Women civil rights activists Women government ministers of Myanmar Women Nobel laureates Women opposition leaders
2856
https://en.wikipedia.org/wiki/Latin%20American%20Integration%20Association
Latin American Integration Association
The Latin American Integration Association / Asociación Latinoamericana de Integración / Associação Latino-Americana de Integração (LAIA / ALADI) is an international and regional scope organization. It was created on 12 August 1980 by the 1980 Montevideo Treaty, replacing the Latin American Free Trade Association (LAFTA/ALALC). Currently, it has 13 member countries, and any of the Latin American States may apply for accession. Objectives The development of the integration process developed within the framework of the ALADI aims at promoting the harmonious and balanced socio-economic development of the region, and its long-term objective is the gradual and progressive establishment of a Latin-American single market. Basic functions Promotion and regulation of reciprocal trade Economic complementation Development of economic cooperation actions contributing to the markets extension. General principles Pluralism in political and economic matters; Progressive convergence of partial actions for the establishment of a Latin-American Common Market; Flexibility; Differential treatments based on the development level of the member countries; and Multiple forms of trade agreements. Integration mechanisms The ALADI promotes the establishment of an area of economic preferences within the region, in order to create a Latin-American common market, through three mechanisms: A Regional Tariff Preference applied to goods from the member countries compared to tariffs in-force for third countries. Regional Scope Agreements, those in which all member countries participate. Partial Scope Agreements, those wherein two or more countries of the area participate. The Relatively Less Economically Developed Countries of the region (Bolivia, Ecuador and Paraguay) benefit from a preferential system, through the lists of markets opening offered by the countries in favor of the Relatively Less Economically Developed Countries; special programs of cooperation (business rounds, pre-investment, financing, technological support); and countervailing measures in favor of the land-locked countries, the full participation of such countries in the integration process is sought. The ALADI includes in its legal structure the strongest sub-regional, plurilateral and bilateral integration agreements arising in growing numbers in the continent. As a result, the ALADI – as an institutional and legal framework or “umbrella” of the regional integration- develops actions in order to support and foster these efforts for the progressive establishment of a common economic space. Member states Accession of other Latin American countries The 1980 Montevideo Treaty is open to the accession of any Latin-American country. On 26 August 1999, the first accession to the 1980 Montevideo Treaty was executed, with the incorporation of the Republic of Cuba as a member country of the ALADI. On 10 May 2012, the Republic of Panama became the thirteenth member country of the ALADI. Likewise, the accession of the Republic of Nicaragua was accepted in the Sixteenth Meeting of the Council of Ministers (Resolution 75 (XVI)), held on 11 August 2011. Currently, Nicaragua moves towards the fulfillment of conditions for becoming a member country of the ALADI. The ALADI opens its field of actions for the rest of Latin America through multilateral links or partial agreements with other countries and integration areas of the continent (Article 25). The Latin-American Integration Association also contemplates the horizontal cooperation with other integration movements in the world and partial actions with third developing countries or their respective integration areas (Article 27). Institutional structure Council of Ministers of Foreign Affairs The Council of Ministers is the supreme body of the ALADI, and adopts the decisions for the superior political management of the integration process. It is constituted by the Ministers of Foreign Affairs of the member countries. Notwithstanding, when one of such member countries assigns the competence of the integration affairs to a different Minister or Secretary of State, the member countries may be represented, with full powers, by the respective Minister or Secretary. It is convened by the Committee of Representatives, meets and makes decisions with the presence of all the member countries. Evaluation and Convergence Conference It is in charge, among others, of analyzing the functioning of the integration process in all its aspects, promoting the convergence of the partial scope agreements seeking their progressive multilateralization, and promoting greater scope actions as regards economic integration. It is made up of Plenipotentiaries of the member countries. Committee of Representatives It is the permanent political body and negotiating forum of the ALADI, where all the initiatives for the fulfillment of the objectives established by the 1980 Montevideo Treaty are analyzed and agreed on. It is composed of a Permanent Representative of each member country with right to one vote and an Alternate Representative. It meets regularly every 15 days and its Resolutions are adopted by the affirmative vote of two thirds of the member countries. General Secretariat It is the technical body of the ALADI, and it may propose, evaluate, study and manage for the fulfillment of the objectives of the ALADI. It is composed of technical and administrative personnel, and directed by a Secretary-General, who has the support of two Undersecretaries, elected for a three-year period, renewable for the same term. Secretaries general 1980–1984 Julio César Schupp (Paraguay) 1984–1987 Juan José Real (Uruguay) 1987–1990 Norberto Bertaina (Argentina) 1990–1993 Jorge Luis Ordóñez (Colombia) 1993–1999 Antônio José de Cerqueira Antunes (Brasil) 2000–2005 Juan Francisco Rojas Penso (Venezuela) 2005–2008 Didier Opertti (Uruguay) 2008–2009 Bernardino Hugo Saguier-Caballero (Paraguay) 2009–2011 José Félix Fernández Estigarribia (Paraguay) 2011–2017 Carlos Álvarez (Argentina) 2017– Alejandro de la Peña Navarrete (Mexico) See also Free trade Free trade area International trade Central America Free Trade Agreement Free Trade Area of the Americas Latin American economy Trade bloc Mercosur Andean Community of Nations Union of South American Nations Central American Integration System Caribbean Community Latin American Economic System Latin American Parliament PetroCaribe References Latin America Trade blocs United Nations General Assembly observers Organizations based in Montevideo Organizations established in 1980 Palermo, Montevideo International organizations based in the Americas
2858
https://en.wikipedia.org/wiki/Aircraft%20spotting
Aircraft spotting
Aircraft spotting, or planespotting, is a hobby consisting of tracking the movement of aircraft, which is usually accomplished by photography or videography. Besides monitoring aircraft, planespotting enthusiasts (who are usually called planespotters) also record information regarding airports, air traffic control communications, airline routes, and more. History and evolution Aviation enthusiasts have been watching airplanes and other aircraft since aviation began. However, as a hobby (distinct from active/wartime work), planespotting did not appear until the second half of the 20th century. During World War II and the subsequent Cold War some countries encouraged their citizens to become "planespotters" in an "observation corps" or similar public body for reasons of public security. Britain had the Royal Observer Corps which operated between 1925 and 1995. A journal called The Aeroplane Spotter was published in January 1940. The publication included a glossary that was refined in 2010 and published online. The development of technology and global resources enabled a revolution in plane-spotting. Point and shoot cameras, DSLRs & walkie talkies significantly changed the hobby. With the help of the internet, websites such as FlightAware and Flightradar24 have made it possible for spotters to track and locate specific aircraft from all across the world. Websites specifically for aircraft, such as airliners.net, and social networking services, such as Twitter, Facebook and Instagram, allow spotters to record their sightings and upload their photos or see pictures of aircraft spotted by other people worldwide. Techniques When spotting aircraft, observers generally notice the key attributes of an aircraft, such as a distinctive noise from its engine, the number of contrails it is producing, or its callsign. Observers can also assess the size of the aircraft and the number, type, and position of its engines. Another distinctive attribute is the position of wings relative to the fuselage and the degree to which they are swept rearwards. The wings may be above the fuselage, below it, or fixed at midpoint. The number of wings indicates whether it is a monoplane, biplane or triplane. The position of the tailplane relative to the fin(s) and the shape of the fin are other attributes. The configuration of the landing gear can be distinctive, as well as the size and shape of the cockpit and passenger windows along with the layout of emergency exits and doors. Other features include the speed, cockpit placement, colour scheme or special equipment that changes the silhouette of the aircraft. Taken together these traits will enable the identification of an aircraft. If the observer is familiar with the airfield being used by the aircraft and its normal traffic patterns, he or she is more likely to leap quickly to a decision about the aircraft's identity – they may have seen the same type of aircraft from the same angle many times. This is particularly prevalent if the aircraft spotter is spotting commercial aircraft, operated by airlines that have a limited fleet. Spotters use equipment such as ADS-B decoders to track the movements of aircraft. The two most famous devices used are the AirNav Systems RadarBox and Kinetic Avionics SBS series. Both of them read and process the radar data and show the movements on a computer screen. Another tool that spotters can use are apps such as FlightRadar24 or Flightaware, where they can look at arrival and departure schedules and track the location of aircraft that have their transponder on. Most of the decoders also allow the exporting of logs from a certain route or airport. Spotting styles Some spotters will note and compile the markings, a national insignia or airline livery or logo, a squadron badge or code letters in the case of a military aircraft. Published manuals allow more information to be deduced, such as the delivery date or the manufacturer's construction number. Camouflage markings differ, depending on the surroundings in which that aircraft is expected to operate. In general, most spotters attempt to see as many aircraft of a given type, a particular airline, or a particular subset of aircraft such as business jets, commercial airliners, military and/or general aviation aircraft. Some spotters attempt to see every airframe and are known as "frame spotters." Others are keen to see every registration worn by each aircraft. Ancillary activities might include listening-in to air traffic control transmissions (using radio scanners, where that is legal), liaising with other "spotters" to clear up uncertainties as to what aircraft have been seen at specific times or in particular places. Several internet mailing list groups have been formed to help communicate aircraft seen at airports, queries and anomalies. These groups can cater to certain regions, certain aircraft types, or may appeal to a wider audience. The result is that information on aircraft movements can be delivered worldwide in a real-time fashion to spotters. The hobbyist might travel long distances to visit different airports, to see an unusual aircraft, or to view the remains of aircraft withdrawn from use. Air shows usually draw large numbers of spotters as they are opportunities to enter airfields and air bases worldwide that are usually closed to the public and to see displayed aircraft at close range. Some aircraft may be placed in the care of museums (see Aviation archaeology) – or perhaps be cannibalized in order to repair a similar aircraft already preserved. Aircraft registrations can be found in books, with online resources, or in monthly magazines from enthusiast groups. Most spotters maintained books of different aircraft fleets and would underline or check each aircraft seen. Each year, a revised version of the books would be published and the spotter would need to re-underline every aircraft seen. With the development of commercial aircraft databases spotters were finally able to record their sightings in an electronic database and produce reports that emulated the underlined books. Legal ramifications The legal repercussions of the hobby were dramatically shown in November 2001 when fourteen aircraft spotters (twelve British, two Dutch) were arrested by Greek police after being observed at an open day at the Greek Air Force base at Kalamata. They were charged with espionage and faced a possible 20-year prison sentence if found guilty. After being held for six weeks, they were eventually released on $11,696 (£9,000) bail, and the charges reduced to the misdemeanor charge of illegal information collection. Confident of their innocence they returned for their trial in April, 2002 and were stunned to be found guilty, with eight of the group sentenced to three years, the rest for one year. At their appeal a year later, all were acquitted. As airport watch groups In the wake of the targeting of airports by terrorists, enthusiasts' organisations and police in the UK have cooperated in creating a code of conduct for planespotters, in a similar vein to guidelines devised for train spotters. By asking enthusiasts to contact police if spotters believe they see or hear something suspicious, this is an attempt to allow enthusiasts to continue their hobby while increasing security around airports. Birmingham and Stansted pioneered this approach in Britain and prior to the 2012 London Olympics, RAF Northolt introduced a Flightwatch scheme based on the same cooperative principles. These changes are also being made abroad in countries such as Australia, where aviation enthusiasts are reporting suspicious or malicious actions to police. The organisation of such groups has now been echoed in parts of North America. For example, the Bensenville, Illinois police department have sponsored an Airport Watch group at the Chicago O'Hare Airport. Members are issued identification cards and given training to accurately record and report unusual activities around the airport perimeter. (Members are not permitted airside.) Meetings are attended and supported by the FBI, Chicago Department of Aviation and the TSA who also provide regular training to group members. The Bensenville program was modeled on similar programs in Toronto, Ottawa and Minneapolis. In 2009, a similar airport watch group was organized between airport security and local aircraft spotters at Montréal–Pierre Elliott Trudeau International Airport. As of 2016, the group has 46 members and a special phone number to use to contact police if suspicious activity is seen around the airport area. Extraordinary rendition Following the events of 9/11, information collected by planespotters helped uncover what is known as extraordinary rendition by the CIA. Information on unusual movements of rendition aircraft provided data that was mapped by critical geographers such as Trevor Paglen and the Institute for Applied Autonomy. These data and maps led first to news reports and then to a number of governmental and inter-governmental investigations. See also Bus spotting Car spotting Train spotting Satellite watching References External links SpottersWiki: The Ultimate Airport Spotting Guide Airport Spotting Websites & Resources Spotter Guide JetPhotos (part of the Flightradar24) Planespotters.net Spotters.Aero (Ukrainian Spotter's Site) Aviation photography Observation hobbies
2861
https://en.wikipedia.org/wiki/Advertising
Advertising
Advertising is the practice and techniques employed to bring attention to a product or service. Advertising aims to put a product or service in the spotlight in hopes of drawing it attention from consumers. It is typically used to promote a specific good or service, but there are wide range of uses, the most common being the commercial advertisement. Commercial advertisements often seek to generate increased consumption of their products or services through "branding", which associates a product name or image with certain qualities in the minds of consumers. On the other hand, ads that intend to elicit an immediate sale are known as direct-response advertising. Non-commercial entities that advertise more than consumer products or services include political parties, interest groups, religious organizations and governmental agencies. Non-profit organizations may use free modes of persuasion, such as a public service announcement. Advertising may also help to reassure employees or shareholders that a company is viable or successful. In the 19th century, soap businesses were among the first to employ large-scale advertising campaigns. Thomas J. Barratt was hired by Pears to be its brand manager—the first of its kind—and in addition to creating slogans and images he recruited West End stage actress and socialite Lillie Langtry to become the poster-girl for Pears, making her the first celebrity to endorse a commercial product. Modern advertising originated with the techniques introduced with tobacco advertising in the 1920s, most significantly with the campaigns of Edward Bernays, considered the founder of modern, "Madison Avenue" advertising. Worldwide spending on advertising in 2015 amounted to an estimated . Advertising's projected distribution for 2017 was 40.4% on TV, 33.3% on digital, 9% on newspapers, 6.9% on magazines, 5.8% on outdoor and 4.3% on radio. Internationally, the largest ("Big Five") advertising agency groups are Omnicom, WPP, Publicis, Interpublic, and Dentsu. In Latin, advertere means "to turn towards". History Egyptians used papyrus to make sales messages and wall posters. Commercial messages and political campaign displays have been found in the ruins of Pompeii and ancient Arabia. Lost and found advertising on papyrus was common in ancient Greece and ancient Rome. Wall or rock painting for commercial advertising is another manifestation of an ancient advertising form, which is present to this day in many parts of Asia, Africa, and South America. The tradition of wall painting can be traced back to Indian rock art paintings that date back to 4000 BC. In ancient China, the earliest advertising known was oral, as recorded in the Classic of Poetry (11th to 7th centuries BC) of bamboo flutes played to sell confectionery. Advertisement usually takes in the form of calligraphic signboards and inked papers. A copper printing plate dated back to the Song dynasty used to print posters in the form of a square sheet of paper with a rabbit logo with "Jinan Liu's Fine Needle Shop" and "We buy high-quality steel rods and make fine-quality needles, to be ready for use at home in no time" written above and below is considered the world's earliest identified printed advertising medium. In Europe, as the towns and cities of the Middle Ages began to grow, and the general population was unable to read, instead of signs that read "cobbler", "miller", "tailor", or "blacksmith", images associated with their trade would be used such as a boot, a suit, a hat, a clock, a diamond, a horseshoe, a candle or even a bag of flour. Fruits and vegetables were sold in the city square from the backs of carts and wagons and their proprietors used street callers (town criers) to announce their whereabouts. The first compilation of such advertisements was gathered in "Les Crieries de Paris", a thirteenth-century poem by Guillaume de la Villeneuve. 18th-19th century: Newspaper Advertising In the 18th century advertisements started to appear in weekly newspapers in England. These early print advertisements were used mainly to promote books and newspapers, which became increasingly affordable with advances in the printing press; and medicines, which were increasingly sought after. However, false advertising and so-called "quack" advertisements became a problem, which ushered in the regulation of advertising content. In the United States, newspapers grew quickly in the first few decades of the 19th century, in part due to advertising. By 1822, the United States had more newspaper readers than any other country. About half of the content of these newspapers consisted of advertising, usually local advertising, with half of the daily newspapers in the 1810s using the word "advertiser" in their name. In June 1836, French newspaper La Presse was the first to include paid advertising in its pages, allowing it to lower its price, extend its readership and increase its profitability and the formula was soon copied by all titles. Around 1840, Volney B. Palmer established the roots of the modern day advertising agency in Philadelphia. In 1842 Palmer bought large amounts of space in various newspapers at a discounted rate then resold the space at higher rates to advertisers. The actual ad – the copy, layout, and artwork – was still prepared by the company wishing to advertise; in effect, Palmer was a space broker. The situation changed when the first full-service advertising agency of N.W. Ayer & Son was founded in 1869 in Philadelphia. Ayer & Son offered to plan, create, and execute complete advertising campaigns for its customers. By 1900 the advertising agency had become the focal point of creative planning, and advertising was firmly established as a profession. Around the same time, in France, Charles-Louis Havas extended the services of his news agency, Havas to include advertisement brokerage, making it the first French group to organize. At first, agencies were brokers for advertisement space in newspapers. Late 19th century: Modern Advertising Thomas J. Barratt of London has been called "the father of modern advertising". Working for the Pears soap company, Barratt created an effective advertising campaign for the company products, which involved the use of targeted slogans, images and phrases. One of his slogans, "Good morning. Have you used Pears' soap?" was famous in its day and into the 20th century. In 1882, Barratt recruited English actress and socialite Lillie Langtry to become the poster-girl for Pears, making her the first celebrity to endorse a commercial product. Becoming the company's brand manager in 1865, listed as the first of its kind by the Guinness Book of Records, Barratt introduced many of the crucial ideas that lie behind successful advertising and these were widely circulated in his day. He constantly stressed the importance of a strong and exclusive brand image for Pears and of emphasizing the product's availability through saturation campaigns. He also understood the importance of constantly reevaluating the market for changing tastes and mores, stating in 1907 that "tastes change, fashions change, and the advertiser has to change with them. An idea that was effective a generation ago would fall flat, stale, and unprofitable if presented to the public today. Not that the idea of today is always better than the older idea, but it is different – it hits the present taste." Enhanced advertising revenues was one effect of the Industrial Revolution in Britain. Thanks to the revolution and the consumers it created, by the mid-19th century biscuits and chocolate became products for the masses, and British biscuit manufacturers were among the first to introduce branding to distinguish grocery products. One the world's first global brands, Huntley & Palmers biscuits were sold in 172 countries in 1900, and their global reach was reflected in their advertisements. 20th century As a result of massive industrialization, advertising increased dramatically in the United States. In 1919 it was 2.5 percent of gross domestic product (GDP) in the US, and it averaged 2.2 percent of GDP between then and at least 2007, though it may have declined dramatically since the Great Recession. Industry could not benefit from its increased productivity without a substantial increase in consumer spending. This contributed to the development of mass marketing designed to influence the population's economic behavior on a larger scale. In the 1910s and 1920s, advertisers in the U.S. adopted the doctrine that human instincts could be targeted and harnessed – "sublimated" into the desire to purchase commodities. Edward Bernays, a nephew of Sigmund Freud, became associated with the method and is sometimes called the founder of modern advertising and public relations. Bernays claimed that:In other words, selling products by appealing to the rational minds of customers (the main method used prior to Bernays) was much less effective than selling products based on the unconscious desires that Bernays felt were the true motivators of human action. "Sex sells" became a controversial issue, with techniques for titillating and enlarging the audience posing a challenge to conventional morality. In the 1920s, under Secretary of Commerce Herbert Hoover, the American government promoted advertising. Hoover himself delivered an address to the Associated Advertising Clubs of the World in 1925 called 'Advertising Is a Vital Force in Our National Life." In October 1929, the head of the U.S. Bureau of Foreign and Domestic Commerce, Julius Klein, stated "Advertising is the key to world prosperity." This was part of the "unparalleled" collaboration between business and government in the 1920s, according to a 1933 European economic journal. The tobacco companies became major advertisers in order to sell packaged cigarettes. The tobacco companies pioneered the new advertising techniques when they hired Bernays to create positive associations with tobacco smoking. Advertising was also used as a vehicle for cultural assimilation, encouraging workers to exchange their traditional habits and community structure in favor of a shared "modern" lifestyle. An important tool for influencing immigrant workers was the American Association of Foreign Language Newspapers (AAFLN). The AAFLN was primarily an advertising agency but also gained heavily centralized control over much of the immigrant press. At the turn of the 20th century, advertising was one of the few career choices for women. Since women were responsible for most household purchasing done, advertisers and agencies recognized the value of women's insight during the creative process. In fact, the first American advertising to use a sexual sell was created by a woman – for a soap product. Although tame by today's standards, the advertisement featured a couple with the message "A skin you love to touch". In the 1920s, psychologists Walter D. Scott and John B. Watson contributed applied psychological theory to the field of advertising. Scott said, "Man has been called the reasoning animal but he could with greater truthfulness be called the creature of suggestion. He is reasonable, but he is to a greater extent suggestible". He demonstrated this through his advertising technique of a direct command to the consumer. Radio from the 1920s In the early 1920s, the first radio stations were established by radio equipment manufacturers, followed by non-profit organizations such as schools, clubs and civic groups who also set up their own stations. Retailer and consumer goods manufacturers quickly recognized radio's potential to reach consumers in their home and soon adopted advertising techniques that would allow their messages to stand out; slogans, mascots, and jingles began to appear on radio in the 1920s and early television in the 1930s. The rise of mass media communications allowed manufacturers of branded goods to bypass retailers by advertising directly to consumers. This was a major paradigm shift which forced manufacturers to focus on the brand and stimulated the need for superior insights into consumer purchasing, consumption and usage behaviour; their needs, wants and aspirations. The earliest radio drama series were sponsored by soap manufacturers and the genre became known as a soap opera. Before long, radio station owners realized they could increase advertising revenue by selling 'air-time' in small time allocations which could be sold to multiple businesses. By the 1930s, these advertising spots, as the packets of time became known, were being sold by the station's geographical sales representatives, ushering in an era of national radio advertising. By the 1940s, manufacturers began to recognize the way in which consumers were developing personal relationships with their brands in a social/psychological/anthropological sense. Advertisers began to use motivational research and consumer research to gather insights into consumer purchasing. Strong branded campaigns for Chrysler and Exxon/Esso, using insights drawn research methods from psychology and cultural anthropology, led to some of the most enduring campaigns of the 20th century. Commercial television in the 1950s In the early 1950s, the DuMont Television Network began the modern practice of selling advertisement time to multiple sponsors. Previously, DuMont had trouble finding sponsors for many of their programs and compensated by selling smaller blocks of advertising time to several businesses. This eventually became the standard for the commercial television industry in the United States. However, it was still a common practice to have single sponsor shows, such as The United States Steel Hour. In some instances the sponsors exercised great control over the content of the show – up to and including having one's advertising agency actually writing the show. The single sponsor model is much less prevalent now, a notable exception being the Hallmark Hall of Fame. Cable television from the 1980s The late 1980s and early 1990s saw the introduction of cable television and particularly MTV. Pioneering the concept of the music video, MTV ushered in a new type of advertising: the consumer tunes in for the advertising message, rather than it being a by-product or afterthought. As cable and satellite television became increasingly prevalent, specialty channels emerged, including channels entirely devoted to advertising, such as QVC, Home Shopping Network, and ShopTV Canada. Internet from the 1990s With the advent of the ad server, online advertising grew, contributing to the "dot-com" boom of the 1990s. Entire corporations operated solely on advertising revenue, offering everything from coupons to free Internet access. At the turn of the 21st century, some websites, including the search engine Google, changed online advertising by personalizing ads based on web browsing behavior. This has led to other similar efforts and an increase in interactive advertising. The share of advertising spending relative to GDP has changed little across large changes in media since 1925. In 1925, the main advertising media in America were newspapers, magazines, signs on streetcars, and outdoor posters. Advertising spending as a share of GDP was about 2.9 percent. By 1998, television and radio had become major advertising media; by 2017, the balance between broadcast and online advertising had shifted, with online spending exceeding broadcast. Nonetheless, advertising spending as a share of GDP was slightly lower – about 2.4 percent. Guerrilla marketing involves unusual approaches such as staged encounters in public places, giveaways of products such as cars that are covered with brand messages, and interactive advertising where the viewer can respond to become part of the advertising message. This type of advertising is unpredictable, which causes consumers to buy the product or idea. This reflects an increasing trend of interactive and "embedded" ads, such as via product placement, having consumers vote through text messages, and various campaigns utilizing social network services such as Facebook or Twitter. The advertising business model has also been adapted in recent years. In media for equity, advertising is not sold, but provided to start-up companies in return for equity. If the company grows and is sold, the media companies receive cash for their shares. Domain name registrants (usually those who register and renew domains as an investment) sometimes "park" their domains and allow advertising companies to place ads on their sites in return for per-click payments. These ads are typically driven by pay per click search engines like Google or Yahoo, but ads can sometimes be placed directly on targeted domain names through a domain lease or by making contact with the registrant of a domain name that describes a product. Domain name registrants are generally easy to identify through WHOIS records that are publicly available at registrar websites. Classification Advertising may be categorized in a variety of ways, including by style, target audience, geographic scope, medium, or purpose. For example, in print advertising, classification by style can include display advertising (ads with design elements sold by size) vs. classified advertising (ads without design elements sold by the word or line). Advertising may be local, national or global. An ad campaign may be directed toward consumers or to businesses. The purpose of an ad may be to raise awareness (brand advertising), or to elicit an immediate sale (direct response advertising). The term above the line (ATL) is used for advertising involving mass media; more targeted forms of advertising and promotion are referred to as below the line (BTL). The two terms date back to 1954 when Procter & Gamble began paying their advertising agencies differently from other promotional agencies. In the 2010s, as advertising technology developed, a new term, through the line (TTL) began to come into use, referring to integrated advertising campaigns. Traditional media Virtually any medium can be used for advertising. Commercial advertising media can include wall paintings, billboards, street furniture components, printed flyers and rack cards, radio, cinema and television adverts, web banners, mobile telephone screens, shopping carts, web popups, skywriting, bus stop benches, human billboards and forehead advertising, magazines, newspapers, town criers, sides of buses, banners attached to or sides of airplanes ("logojets"), in-flight advertisements on seatback tray tables or overhead storage bins, taxicab doors, roof mounts and passenger screens, musical stage shows, subway platforms and trains, elastic bands on disposable diapers, doors of bathroom stalls, stickers on apples in supermarkets, shopping cart handles (grabertising), the opening section of streaming audio and video, posters, and the backs of event tickets and supermarket receipts. Any situation in which an "identified" sponsor pays to deliver their message through a medium is advertising. Television Television advertising is one of the most expensive types of advertising; networks charge large amounts for commercial airtime during popular events. The annual Super Bowl football game in the United States is known as the most prominent advertising event on television – with an audience of over 108 million and studies showing that 50% of those only tuned in to see the advertisements. During the 2014 edition of this game, the average thirty-second ad cost US$4 million, and $8 million was charged for a 60-second spot. Virtual advertisements may be inserted into regular programming through computer graphics. It is typically inserted into otherwise blank backdrops or used to replace local billboards that are not relevant to the remote broadcast audience. Virtual billboards may be inserted into the background where none exist in real-life. This technique is especially used in televised sporting events. Virtual product placement is also possible. An infomercial is a long-format television commercial, typically five minutes or longer. The name blends the words "information" and "commercial". The main objective in an infomercial is to create an impulse purchase, so that the target sees the presentation and then immediately buys the product through the advertised toll-free telephone number or website. Infomercials describe and often demonstrate products, and commonly have testimonials from customers and industry professionals. Radio Radio advertisements are broadcast as radio waves to the air from a transmitter to an antenna and a thus to a receiving device. Airtime is purchased from a station or network in exchange for airing the commercials. While radio has the limitation of being restricted to sound, proponents of radio advertising often cite this as an advantage. Radio is an expanding medium that can be found on air, and also online. According to Arbitron, radio has approximately 241.6 million weekly listeners, or more than 93 percent of the U.S. population. Online Online advertising is a form of promotion that uses the Internet and World Wide Web for the expressed purpose of delivering marketing messages to attract customers. Online ads are delivered by an ad server. Examples of online advertising include contextual ads that appear on search engine results pages, banner ads, in pay per click text ads, rich media ads, Social network advertising, online classified advertising, advertising networks and e-mail marketing, including e-mail spam. A newer form of online advertising is Native Ads; they go in a website's news feed and are supposed to improve user experience by being less intrusive. However, some people argue this practice is deceptive. Domain names Domain name advertising is most commonly done through pay per click web search engines, however, advertisers often lease space directly on domain names that generically describe their products. When an Internet user visits a website by typing a domain name directly into their web browser, this is known as "direct navigation", or "type in" web traffic. Although many Internet users search for ideas and products using search engines and mobile phones, a large number of users around the world still use the address bar. They will type a keyword into the address bar such as "geraniums" and add ".com" to the end of it. Sometimes they will do the same with ".org" or a country-code Top Level Domain (TLD such as ".co.uk" for the United Kingdom or ".ca" for Canada). When Internet users type in a generic keyword and add .com or another top-level domain (TLD) ending, it produces a targeted sales lead. Domain name advertising was originally developed by Oingo (later known as Applied Semantics), one of Google's early acquisitions. Product placements is when a product or brand is embedded in entertainment and media. For example, in a film, the main character can use an item or other of a definite brand, as in the movie Minority Report, where Tom Cruise's character John Anderton owns a phone with the Nokia logo clearly written in the top corner, or his watch engraved with the Bulgari logo. Another example of advertising in film is in I, Robot, where main character played by Will Smith mentions his Converse shoes several times, calling them "classics", because the film is set far in the future. I, Robot and Spaceballs also showcase futuristic cars with the Audi and Mercedes-Benz logos clearly displayed on the front of the vehicles. Cadillac chose to advertise in the movie The Matrix Reloaded, which as a result contained many scenes in which Cadillac cars were used. Similarly, product placement for Omega Watches, Ford, VAIO, BMW and Aston Martin cars are featured in recent James Bond films, most notably Casino Royale. In "Fantastic Four: Rise of the Silver Surfer", the main transport vehicle shows a large Dodge logo on the front. Blade Runner includes some of the most obvious product placement; the whole film stops to show a Coca-Cola billboard. Print Print advertising describes advertising in a printed medium such as a newspaper, magazine, or trade journal. This encompasses everything from media with a very broad readership base, such as a major national newspaper or magazine, to more narrowly targeted media such as local newspapers and trade journals on very specialized topics. One form of print advertising is classified advertising, which allows private individuals or companies to purchase a small, narrowly targeted ad paid by the word or line. Another form of print advertising is the display ad, which is generally a larger ad with design elements that typically run in an article section of a newspaper. Outdoor Billboards, also known as hoardings in some parts of the world, are large structures located in public places which display advertisements to passing pedestrians and motorists. Most often, they are located on main roads with a large amount of passing motor and pedestrian traffic; however, they can be placed in any location with large numbers of viewers, such as on mass transit vehicles and in stations, in shopping malls or office buildings, and in stadiums. The form known as street advertising first came to prominence in the UK by Street Advertising Services to create outdoor advertising on street furniture and pavements. Working with products such as Reverse Graffiti, air dancers and 3D pavement advertising, for getting brand messages out into public spaces. Sheltered outdoor advertising combines outdoor with indoor advertisement by placing large mobile, structures (tents) in public places on temporary bases. The large outer advertising space aims to exert a strong pull on the observer, the product is promoted indoors, where the creative decor can intensify the impression. Mobile billboards are generally vehicle mounted billboards or digital screens. These can be on dedicated vehicles built solely for carrying advertisements along routes preselected by clients, they can also be specially equipped cargo trucks or, in some cases, large banners strewn from planes. The billboards are often lighted; some being backlit, and others employing spotlights. Some billboard displays are static, while others change; for example, continuously or periodically rotating among a set of advertisements. Mobile displays are used for various situations in metropolitan areas throughout the world, including: target advertising, one-day and long-term campaigns, conventions, sporting events, store openings and similar promotional events, and big advertisements from smaller companies. Point-of-sale In-store advertising is any advertisement placed in a retail store. It includes placement of a product in visible locations in a store, such as at eye level, at the ends of aisles and near checkout counters (a.k.a. POP – point of purchase display), eye-catching displays promoting a specific product, and advertisements in such places as shopping carts and in-store video displays. Novelties Advertising printed on small tangible items such as coffee mugs, T-shirts, pens, bags, and such is known as novelty advertising. Some printers specialize in printing novelty items, which can then be distributed directly by the advertiser, or items may be distributed as part of a cross-promotion, such as ads on fast food containers. Celebrity endorsements Advertising in which a celebrity endorses a product or brand leverages celebrity power, fame, money, popularity to gain recognition for their products or to promote specific stores' or products. Advertisers often advertise their products, for example, when celebrities share their favorite products or wear clothes by specific brands or designers. Celebrities are often involved in advertising campaigns such as television or print adverts to advertise specific or general products. The use of celebrities to endorse a brand can have its downsides, however; one mistake by a celebrity can be detrimental to the public relations of a brand. For example, following his performance of eight gold medals at the 2008 Olympic Games in Beijing, China, swimmer Michael Phelps' contract with Kellogg's was terminated, as Kellogg's did not want to associate with him after he was photographed smoking marijuana. Celebrities such as Britney Spears have advertised for multiple products including Pepsi, Candies from Kohl's, Twister, NASCAR, and Toyota. Aerial Using aircraft, balloons or airships to create or display advertising media. Skywriting is a notable example. New media approaches A new advertising approach is known as advanced advertising, which is data-driven advertising, using large quantities of data, precise measuring tools and precise targeting. Advanced advertising also makes it easier for companies which sell ad-space to attribute customer purchases to the ads they display or broadcast. Increasingly, other media are overtaking many of the "traditional" media such as television, radio and newspaper because of a shift toward the usage of the Internet for news and music as well as devices like digital video recorders (DVRs) such as TiVo. Online advertising began with unsolicited bulk e-mail advertising known as "e-mail spam". Spam has been a problem for e-mail users since 1978. As new online communication channels became available, advertising followed. The first banner ad appeared on the World Wide Web in 1994. Prices of Web-based advertising space are dependent on the "relevance" of the surrounding web content and the traffic that the website receives. In online display advertising, display ads generate awareness quickly. Unlike search, which requires someone to be aware of a need, display advertising can drive awareness of something new and without previous knowledge. Display works well for direct response. Display is not only used for generating awareness, it is used for direct response campaigns that link to a landing page with a clear 'call to action'. As the mobile phone became a new mass medium in 1998 when the first paid downloadable content appeared on mobile phones in Finland, mobile advertising followed, also first launched in Finland in 2000. By 2007 the value of mobile advertising had reached $2 billion and providers such as Admob delivered billions of mobile ads. More advanced mobile ads include banner ads, coupons, Multimedia Messaging Service picture and video messages, advergames and various engagement marketing campaigns. A particular feature driving mobile ads is the 2D barcode, which replaces the need to do any typing of web addresses, and uses the camera feature of modern phones to gain immediate access to web content. 83 percent of Japanese mobile phone users already are active users of 2D barcodes. Some companies have proposed placing messages or corporate logos on the side of booster rockets and the International Space Station. Unpaid advertising (also called "publicity advertising"), can include personal recommendations ("bring a friend", "sell it"), spreading buzz, or achieving the feat of equating a brand with a common noun (in the United States, "Xerox" = "photocopier", "Kleenex" = tissue, "Vaseline" = petroleum jelly, "Hoover" = vacuum cleaner, and "Band-Aid" = adhesive bandage). However, some companies oppose the use of their brand name to label an object. Equating a brand with a common noun also risks turning that brand into a generic trademark – turning it into a generic term which means that its legal protection as a trademark is lost. Early in its life, The CW aired short programming breaks called "Content Wraps", to advertise one company's product during an entire commercial break. The CW pioneered "content wraps" and some products featured were Herbal Essences, Crest, Guitar Hero II, CoverGirl, and Toyota. A new promotion concept has appeared, "ARvertising", advertising on augmented reality technology. Controversy exists on the effectiveness of subliminal advertising (see mind control), and the pervasiveness of mass messages (propaganda). Rise in new media With the Internet came many new advertising opportunities. Pop-up, Flash, banner, pop-under, advergaming, and email advertisements (all of which are often unwanted or spam in the case of email) are now commonplace. Particularly since the rise of "entertaining" advertising, some people may like an advertisement enough to wish to watch it later or show a friend. In general, the advertising community has not yet made this easy, although some have used the Internet to widely distribute their ads to anyone willing to see or hear them. In the last three quarters of 2009, mobile and Internet advertising grew by 18% and 9% respectively, while older media advertising saw declines: −10.1% (TV), −11.7% (radio), −14.8% (magazines) and −18.7% (newspapers). Between 2008 and 2014, U.S. newspapers lost more than half their print advertising revenue. Niche marketing Another significant trend regarding future of advertising is the growing importance of the niche market using niche or targeted ads. Also brought about by the Internet and the theory of the long tail, advertisers will have an increasing ability to reach specific audiences. In the past, the most efficient way to deliver a message was to blanket the largest mass market audience possible. However, usage tracking, customer profiles and the growing popularity of niche content brought about by everything from blogs to social networking sites, provide advertisers with audiences that are smaller but much better defined, leading to ads that are more relevant to viewers and more effective for companies' marketing products. Among others, Comcast Spotlight is one such advertiser employing this method in their video on demand menus. These advertisements are targeted to a specific group and can be viewed by anyone wishing to find out more about a particular business or practice, from their home. This causes the viewer to become proactive and actually choose what advertisements they want to view. Niche marketing could also be helped by bringing the issue of colour into advertisements. Different colours play major roles when it comes to marketing strategies, for example, seeing the blue can promote a sense of calmness and gives a sense of security which is why many social networks such as Facebook use blue in their logos. Google AdSense is an example of niche marketing. Google calculates the primary purpose of a website and adjusts ads accordingly; it uses keywords on the page (or even in emails) to find the general ideas of topics disused and places ads that will most likely be clicked on by viewers of the email account or website visitors. Crowdsourcing The concept of crowdsourcing has given way to the trend of user-generated advertisements. User-generated ads are created by people, as opposed to an advertising agency or the company themselves, often resulting from brand sponsored advertising competitions. For the 2007 Super Bowl, the Frito-Lays division of PepsiCo held the "Crash the Super Bowl" contest, allowing people to create their own Doritos commercials. Chevrolet held a similar competition for their Tahoe line of SUVs. Due to the success of the Doritos user-generated ads in the 2007 Super Bowl, Frito-Lays relaunched the competition for the 2009 and 2010 Super Bowl. The resulting ads were among the most-watched and most-liked Super Bowl ads. In fact, the winning ad that aired in the 2009 Super Bowl was ranked by the USA Today Super Bowl Ad Meter as the top ad for the year while the winning ads that aired in the 2010 Super Bowl were found by Nielsen's BuzzMetrics to be the "most buzzed-about". Another example of companies using crowdsourcing successfully is the beverage company Jones Soda that encourages consumers to participate in the label design themselves. This trend has given rise to several online platforms that host user-generated advertising competitions on behalf of a company. Founded in 2007, Zooppa has launched ad competitions for brands such as Google, Nike, Hershey's, General Mills, Microsoft, NBC Universal, Zinio, and Mini Cooper. Crowdsourcing remains controversial, as the long-term impact on the advertising industry is still unclear. Globalization Advertising has gone through five major stages of development: domestic, export, international, multi-national, and global. For global advertisers, there are four, potentially competing, business objectives that must be balanced when developing worldwide advertising: building a brand while speaking with one voice, developing economies of scale in the creative process, maximising local effectiveness of ads, and increasing the company's speed of implementation. Born from the evolutionary stages of global marketing are the three primary and fundamentally different approaches to the development of global advertising executions: exporting executions, producing local executions, and importing ideas that travel. Advertising research is key to determining the success of an ad in any country or region. The ability to identify which elements and/or moments of an ad contribute to its success is how economies of scale are maximized. Once one knows what works in an ad, that idea or ideas can be imported by any other market. Market research measures, such as Flow of Attention, Flow of Emotion and branding moments provide insight into what is working in an ad in any country or region because the measures are based on the visual, not verbal, elements of the ad. Foreign public messaging Foreign governments, particularly those that own marketable commercial products or services, often promote their interests and positions through the advertising of those goods because the target audience is not only largely unaware of the forum as a vehicle for foreign messaging but also willing to receive the message while in a mental state of absorbing information from advertisements during television commercial breaks, while reading a periodical, or while passing by billboards in public spaces. A prime example of this messaging technique is advertising campaigns to promote international travel. While advertising foreign destinations and services may stem from the typical goal of increasing revenue by drawing more tourism, some travel campaigns carry the additional or alternative intended purpose of promoting good sentiments or improving existing ones among the target audience towards a given nation or region. It is common for advertising promoting foreign countries to be produced and distributed by the tourism ministries of those countries, so these ads often carry political statements and/or depictions of the foreign government's desired international public perception. Additionally, a wide range of foreign airlines and travel-related services which advertise separately from the destinations, themselves, are owned by their respective governments; examples include, though are not limited to, the Emirates airline (Dubai), Singapore Airlines (Singapore), Qatar Airways (Qatar), China Airlines (Taiwan/Republic of China), and Air China (People's Republic of China). By depicting their destinations, airlines, and other services in a favorable and pleasant light, countries market themselves to populations abroad in a manner that could mitigate prior public impressions. Diversification In the realm of advertising agencies, continued industry diversification has seen observers note that "big global clients don't need big global agencies any more". This is reflected by the growth of non-traditional agencies in various global markets, such as Canadian business TAXI and SMART in Australia and has been referred to as "a revolution in the ad world". New technology The ability to record shows on digital video recorders (such as TiVo) allow watchers to record the programs for later viewing, enabling them to fast forward through commercials. Additionally, as more seasons of pre-recorded box sets are offered for sale of television programs; fewer people watch the shows on TV. However, the fact that these sets are sold, means the company will receive additional profits from these sets. To counter this effect, a variety of strategies have been employed. Many advertisers have opted for product placement on TV shows like Survivor. Other strategies include integrating advertising with internet-connected program guidess (EPGs), advertising on companion devices (like smartphones and tablets) during the show, and creating mobile apps for TV programs. Additionally, some like brands have opted for social television sponsorship. The emerging technology of drone displays has recently been used for advertising purposes. Education In recent years there have been several media literacy initiatives, and more specifically concerning advertising, that seek to empower citizens in the face of media advertising campaigns. Advertising education has become popular with bachelor, master and doctorate degrees becoming available in the emphasis. A surge in advertising interest is typically attributed to the strong relationship advertising plays in cultural and technological changes, such as the advance of online social networking. A unique model for teaching advertising is the student-run advertising agency, where advertising students create campaigns for real companies. Organizations such as the American Advertising Federation establish companies with students to create these campaigns. Purposes Advertising is at the front of delivering the proper message to customers and prospective customers. The purpose of advertising is to inform the consumers about their product and convince customers that a company's services or products are the best, enhance the image of the company, point out and create a need for products or services, demonstrate new uses for established products, announce new products and programs, reinforce the salespeople's individual messages, draw customers to the business, and to hold existing customers. Sales promotions and brand loyalty Sales promotions are another way to advertise. Sales promotions are double purposed because they are used to gather information about what type of customers one draws in and where they are, and to jump start sales. Sales promotions include things like contests and games, sweepstakes, product giveaways, samples coupons, loyalty programs, and discounts. The ultimate goal of sales promotions is to stimulate potential customers to action. Criticisms While advertising can be seen as necessary for economic growth, it is not without social costs. Unsolicited commercial e-mail and other forms of spam have become so prevalent as to have become a major nuisance to users of these services, as well as being a financial burden on internet service providers. Advertising is increasingly invading public spaces, such as schools, which some critics argue is a form of child exploitation. This increasing difficulty in limiting exposure to specific audiences can result in negative backlash for advertisers. In tandem with these criticisms, the advertising industry has seen low approval rates in surveys and negative cultural portrayals. One of the most controversial criticisms of advertisement in the present day is that of the predominance of advertising of foods high in sugar, fat, and salt specifically to children. Critics claim that food advertisements targeting children are exploitive and are not sufficiently balanced with proper nutritional education to help children understand the consequences of their food choices. Additionally, children may not understand that they are being sold something, and are therefore more impressionable. Michelle Obama has criticized large food companies for advertising unhealthy foods largely towards children and has requested that food companies either limit their advertising to children or advertise foods that are more in line with dietary guidelines. The other criticisms include the change that are brought by those advertisements on the society and also the deceiving ads that are aired and published by the corporations. Cosmetic and health industry are the ones which exploited the highest and created reasons of concern. A 2021 study found that for more than 80% of brands, advertising had a negative return on investment. Unsolicited ads have been criticized as attention theft. Regulation There have been increasing efforts to protect the public interest by regulating the content and the influence of advertising. Some examples include restrictions for advertising alcohol, tobacco or gambling imposed in many countries, as well as the bans around advertising to children, which exist in parts of Europe. Advertising regulation focuses heavily on the veracity of the claims and as such, there are often tighter restrictions placed around advertisements for food and healthcare products. The advertising industries within some countries rely less on laws and more on systems of self-regulation. Advertisers and the media agree on a code of advertising standards that they attempt to uphold. The general aim of such codes is to ensure that any advertising is 'legal, decent, honest and truthful'. Some self-regulatory organizations are funded by the industry, but remain independent, with the intent of upholding the standards or codes like the Advertising Standards Authority in the UK. In the UK, most forms of outdoor advertising such as the display of billboards is regulated by the UK Town and County Planning system. Currently, the display of an advertisement without consent from the Planning Authority is a criminal offense liable to a fine of £2,500 per offense. In the US, many communities believe that many forms of outdoor advertising blight the public realm. As long ago as the 1960s in the US, there were attempts to ban billboard advertising in the open countryside. Cities such as São Paulo have introduced an outright ban with London also having specific legislation to control unlawful displays. Some governments restrict the languages that can be used in advertisements, but advertisers may employ tricks to try avoiding them. In France for instance, advertisers sometimes print English words in bold and French translations in fine print to deal with Article 120 of the 1994 Toubon Law limiting the use of English. The advertising of pricing information is another topic of concern for governments. In the United States for instance, it is common for businesses to only mention the existence and amount of applicable taxes at a later stage of a transaction. In Canada and New Zealand, taxes can be listed as separate items, as long as they are quoted up-front. In most other countries, the advertised price must include all applicable taxes, enabling customers to easily know how much it will cost them. Theory Hierarchy-of-effects models Various competing models of hierarchies of effects attempt to provide a theoretical underpinning to advertising practice. The model of Clow and Baack clarifies the objectives of an advertising campaign and for each individual advertisement. The model postulates six steps a buyer moves through when making a purchase: Awareness Knowledge Liking Preference Conviction Purchase Means-end theory suggests that an advertisement should contain a message or means that leads the consumer to a desired end-state. Leverage points aim to move the consumer from understanding a product's benefits to linking those benefits with personal values. Marketing mix The marketing mix was proposed by professor E. Jerome McCarthy in the 1960s. It consists of four basic elements called the "four Ps". Product is the first P representing the actual product. Price represents the process of determining the value of a product. Place represents the variables of getting the product to the consumer such as distribution channels, market coverage and movement organization. The last P stands for Promotion which is the process of reaching the target market and convincing them to buy the product. In the 1990s, the concept of four Cs was introduced as a more customer-driven replacement of four P's. There are two theories based on four Cs: Lauterborn's four Cs (consumer, cost, communication, convenience) and Shimizu's four Cs (commodity, cost, communication, channel) in the 7Cs Compass Model (Co-marketing). Communications can include advertising, sales promotion, public relations, publicity, personal selling, corporate identity, internal communication, SNS, and MIS. Research Advertising research is a specialized form of research that works to improve the effectiveness and efficiency of advertising. It entails numerous forms of research which employ different methodologies. Advertising research includes pre-testing (also known as copy testing) and post-testing of ads and/or campaigns. Pre-testing includes a wide range of qualitative and quantitative techniques, including: focus groups, in-depth target audience interviews (one-on-one interviews), small-scale quantitative studies and physiological measurement. The goal of these investigations is to better understand how different groups respond to various messages and visual prompts, thereby providing an assessment of how well the advertisement meets its communications goals. Post-testing employs many of the same techniques as pre-testing, usually with a focus on understanding the change in awareness or attitude attributable to the advertisement. With the emergence of digital advertising technologies, many firms have begun to continuously post-test ads using real-time data. This may take the form of A/B split-testing or multivariate testing. Continuous ad tracking and the Communicus System are competing examples of post-testing advertising research types. Semiotics Meanings between consumers and marketers depict signs and symbols that are encoded in everyday objects. Semiotics is the study of signs and how they are interpreted. Advertising has many hidden signs and meanings within brand names, logos, package designs, print advertisements, and television advertisements. Semiotics aims to study and interpret the message being conveyed in (for example) advertisements. Logos and advertisements can be interpreted at two levels – known as the surface level and the underlying level. The surface level uses signs creatively to create an image or personality for a product. These signs can be images, words, fonts, colors, or slogans. The underlying level is made up of hidden meanings. The combination of images, words, colors, and slogans must be interpreted by the audience or consumer. The "key to advertising analysis" is the signifier and the signified. The signifier is the object and the signified is the mental concept. A product has a signifier and a signified. The signifier is the color, brand name, logo design, and technology. The signified has two meanings known as denotative and connotative. The denotative meaning is the meaning of the product. A television's denotative meaning might be that it is high definition. The connotative meaning is the product's deep and hidden meaning. A connotative meaning of a television would be that it is top-of-the-line. Apple's commercials used a black silhouette of a person that was the age of Apple's target market. They placed the silhouette in front of a blue screen so that the picture behind the silhouette could be constantly changing. However, the one thing that stays the same in these ads is that there is music in the background and the silhouette is listening to that music on a white iPod through white headphones. Through advertising, the white color on a set of earphones now signifies that the music device is an iPod. The white color signifies almost all of Apple's products. The semiotics of gender plays a key influence on the way in which signs are interpreted. When considering gender roles in advertising, individuals are influenced by three categories. Certain characteristics of stimuli may enhance or decrease the elaboration of the message (if the product is perceived as feminine or masculine). Second, the characteristics of individuals can affect attention and elaboration of the message (traditional or non-traditional gender role orientation). Lastly, situational factors may be important to influence the elaboration of the message. There are two types of marketing communication claims-objective and subjective. Objective claims stem from the extent to which the claim associates the brand with a tangible product or service feature. For instance, a camera may have auto-focus features. Subjective claims convey emotional, subjective, impressions of intangible aspects of a product or service. They are non-physical features of a product or service that cannot be directly perceived, as they have no physical reality. For instance the brochure has a beautiful design. Males tend to respond better to objective marketing-communications claims while females tend to respond better to subjective marketing communications claims. Voiceovers are commonly used in advertising. Most voiceovers are done by men, with figures of up to 94% having been reported. There have been more female voiceovers in recent years, but mainly for food, household products, and feminine-care products. Gender effects on comprehension According to a 1977 study by David Statt, females process information comprehensively, while males process information through heuristic devices such as procedures, methods or strategies for solving problems, which could have an effect on how they interpret advertising. According to this study, men prefer to have available and apparent cues to interpret the message, whereas females engage in more creative, associative, imagery-laced interpretation. Later research by a Danish team found that advertising attempts to persuade men to improve their appearance or performance, whereas its approach to women aims at transformation toward an impossible ideal of female presentation. In Paul Suggett's article "The Objectification of Women in Advertising" he discusses the negative impact that these women in advertisements, who are too perfect to be real, have on women, as well as men, in real life. Advertising's manipulation of women's aspiration to these ideal types as portrayed in film, in erotic art, in advertising, on stage, within music videos and through other media exposures requires at least a conditioned rejection of female reality and thereby takes on a highly ideological cast. Studies show that these expectations of women and young girls negatively affect their views about their bodies and appearances. These advertisements are directed towards men. Not everyone agrees: one critic viewed this monologic, gender-specific interpretation of advertising as excessively skewed and politicized. There are some companies like Dove and aerie that are creating commercials to portray more natural women, with less post production manipulation, so more women and young girls are able to relate to them. More recent research by Martin (2003) reveals that males and females differ in how they react to advertising depending on their mood at the time of exposure to the ads and on the affective tone of the advertising. When feeling sad, males prefer happy ads to boost their mood. In contrast, females prefer happy ads when they are feeling happy. The television programs in which ads are embedded influence a viewer's mood state. Susan Wojcicki, author of the article "Ads that Empower Women don't just Break Stereotypes—They're also Effective" discusses how advertising to women has changed since the first Barbie commercial, where a little girl tells the doll that, she wants to be just like her. Little girls grow up watching advertisements of scantily clad women advertising things from trucks to burgers and Wojcicki states that this shows girls that they are either arm candy or eye candy. Alternatives Other approaches to revenue include donations, paid subscriptions, microtransactions, and data monetization. Websites and applications are "ad-free" when not using advertisements at all for revenue. For example, the online encyclopaedia Wikipedia provides free content by receiving funding from charitable donations. "Fathers" of advertising Late 1700s – Benjamin Franklin (1706–1790) – "father of advertising in America" Late 1800s – Thomas J. Barratt (1841–1914) of London – called "the father of modern advertising" by T.F.G. Coates Early 1900s – J. Henry ("Slogan") Smythe, Jr of Philadelphia – "world's best known slogan writer" Early 1900s – Albert Lasker (1880–1952) – the "father of modern advertising"; defined advertising as "salesmanship in print, driven by a reason why" Mid-1900s – David Ogilvy (1911–1999) – advertising tycoon, founder of Ogilvy & Mather, known as the "father of advertising" Influential thinkers in advertising theory and practice N. W. Ayer & Son – probably the first advertising agency to use mass media (i.e. telegraph) in a promotional campaign Claude C. Hopkins (1866–1932) – popularised the use of test campaigns, especially coupons in direct mail, to track the efficiency of marketing spend Ernest Dichter (1907–1991) – developed the field of motivational research, used extensively in advertising E. St. Elmo Lewis (1872–1948) – developed the first hierarchy of effects model (AIDA) used in sales and advertising Arthur Nielsen (1897–1980) – founded one of the earliest international advertising agencies and developed ratings for radio & TV David Ogilvy (1911–1999) – pioneered the positioning concept and advocated of the use of brand image in advertising Charles Coolidge Parlin (1872–1942) – regarded as the pioneer of the use of marketing research in advertising Rosser Reeves (1910–1984) – developed the concept of the unique selling proposition (USP) and advocated the use of repetition in advertising Al Ries (1926–2022) – advertising executive, author and credited with coining the term "positioning" in the late 1960s Daniel Starch (1883–1979) – developed the Starch score method of measuring print media effectiveness (still in use) J Walter Thompson – one of the earliest advertising agencies See also Advertisements in schools Advertorial Annoyance factor Bibliography of advertising Branded content Commercial speech Comparative advertising Conquesting Copywriting Demo mode Direct-to-consumer advertising Family in advertising Graphic design Gross rating point History of Advertising Trust Informative advertising Integrated marketing communications List of advertising awards Local advertising Market overhang Media planning Meta-advertising Mobile marketing Performance-based advertising Promotional mix Senior media creative Shock advertising Viral marketing World Federation of Advertisers References Notes Further reading Arens, William, and Michael Weigold. Contemporary Advertising: And Integrated Marketing Communications (2012) Belch, George E., and Michael A. Belch. Advertising and Promotion: An Integrated Marketing Communications Perspective (10th ed. 2014) Biocca, Frank. Television and Political Advertising: Volume I: Psychological Processes (Routledge, 2013) Chandra, Ambarish, and Ulrich Kaiser. "Targeted advertising in magazine markets and the advent of the internet." Management Science 60.7 (2014) pp: 1829–1843. Chen, Yongmin, and Chuan He. "Paid placement: Advertising and search on the internet*." The Economic Journal 121#556 (2011): F309–F328. online Johnson-Cartee, Karen S., and Gary Copeland. Negative political advertising: Coming of age (2013) McAllister, Matthew P. and Emily West, eds. HardcoverThe Routledge Companion to Advertising and Promotional Culture (2013) McFall, Elizabeth Rose Advertising: a cultural economy (2004), cultural and sociological approaches to advertising Moriarty, Sandra, and Nancy Mitchell. Advertising & IMC: Principles and Practice (10th ed. 2014) Okorie, Nelson. The Principles of Advertising: concepts and trends in advertising (2011) Reichert, Tom, and Jacqueline Lambiase, eds. Sex in advertising: Perspectives on the erotic appeal (Routledge, 2014) Sheehan, Kim Bartel. Controversies in contemporary advertising (Sage Publications, 2013) Vestergaard, Torben and Schrøder, Kim. The Language of Advertising. Oxford: Basil Blackwell, 1985. Splendora, Anthony. "Discourse", a Review of Vestergaard and Schrøder, The Language of Advertising in Language in Society Vol. 15, No. 4 (Dec., 1986), pp. 445–449 History Brandt, Allan. The Cigarette Century (2009) Crawford, Robert. But Wait, There's More!: A History of Australian Advertising, 1900–2000 (2008) Ewen, Stuart. Captains of Consciousness: Advertising and the Social Roots of Consumer Culture. New York: McGraw-Hill, 1976. Fox, Stephen R. The mirror makers: A history of American advertising and its creators (University of Illinois Press, 1984) Friedman, Walter A. Birth of a Salesman (Harvard University Press, 2005), In the United States Jacobson, Lisa. Raising consumers: Children and the American mass market in the early twentieth century (Columbia University Press, 2013) Jamieson, Kathleen Hall. Packaging the presidency: A history and criticism of presidential campaign advertising (Oxford University Press, 1996) Laird, Pamela Walker. Advertising progress: American business and the rise of consumer marketing (Johns Hopkins University Press, 2001.) Lears, Jackson. Fables of abundance: A cultural history of advertising in America (1995) Liguori, Maria Chiara. "North and South: Advertising Prosperity in the Italian Economic Boom Years." Advertising & Society Review (2015) 15#4 Meyers, Cynthia B. A Word from Our Sponsor: Admen, Advertising, and the Golden Age of Radio (2014) Mazzarella, William. Shoveling smoke: Advertising and globalization in contemporary India (Duke University Press, 2003) Moriarty, Sandra, et al. Advertising: Principles and practice (Pearson Australia, 2014), Australian perspectives Nevett, Terence R. Advertising in Britain: a history (1982) Oram, Hugh. The advertising book: The history of advertising in Ireland (MOL Books, 1986) Presbrey, Frank. "The history and development of advertising." Advertising & Society Review (2000) 1#1 online Saunders, Thomas J. "Selling under the Swastika: Advertising and Commercial Culture in Nazi Germany." German History (2014): ghu058. Short, John Phillip. "Advertising Empire: Race and Visual Culture in Imperial Germany." Enterprise and Society (2014): khu013. Sivulka, Juliann. Soap, sex, and cigarettes: A cultural history of American advertising (Cengage Learning, 2011) Spring, Dawn. "The Globalization of American Advertising and Brand Management: A Brief History of the J. Walter Thompson Company, Proctor and Gamble, and US Foreign Policy." Global Studies Journal (2013). 5#4 Stephenson, Harry Edward, and Carlton McNaught. The Story of Advertising in Canada: A Chronicle of Fifty Years (Ryerson Press, 1940) Tungate, Mark. Adland: a global history of advertising (Kogan Page Publishers, 2007.) West, Darrell M. Air Wars: Television Advertising and Social Media in Election Campaigns, 1952–2012 (Sage, 2013) External links Hartman Center for Sales, Advertising & Marketing History at Duke University Duke University Libraries Digital Collections: Ad*Access, over 7,000 U.S. and Canadian advertisements, dated 1911–1955, includes World War II propaganda. Emergence of Advertising in America, 9,000 advertising items and publications dating from 1850 to 1940, illustrating the rise of consumer culture and the birth of a professionalized advertising industry in the United States. AdViews, vintage television commercials ROAD 2.0, 30,000 outdoor advertising images Medicine & Madison Avenue, documents advertising of medical and pharmaceutical products Art & Copy, a 2009 documentary film about the advertising industry Articles containing video clips Communication design Promotion and marketing communications Business models
2864
https://en.wikipedia.org/wiki/Archaeoastronomy
Archaeoastronomy
Archaeoastronomy (also spelled archeoastronomy) is the interdisciplinary or multidisciplinary study of how people in the past "have understood the phenomena in the sky, how they used these phenomena and what role the sky played in their cultures". Clive Ruggles argues it is misleading to consider archaeoastronomy to be the study of ancient astronomy, as modern astronomy is a scientific discipline, while archaeoastronomy considers symbolically rich cultural interpretations of phenomena in the sky by other cultures. It is often twinned with ethnoastronomy, the anthropological study of skywatching in contemporary societies. Archaeoastronomy is also closely associated with historical astronomy, the use of historical records of heavenly events to answer astronomical problems and the history of astronomy, which uses written records to evaluate past astronomical practice. Archaeoastronomy uses a variety of methods to uncover evidence of past practices including archaeology, anthropology, astronomy, statistics and probability, and history. Because these methods are diverse and use data from such different sources, integrating them into a coherent argument has been a long-term difficulty for archaeoastronomers. Archaeoastronomy fills complementary niches in landscape archaeology and cognitive archaeology. Material evidence and its connection to the sky can reveal how a wider landscape can be integrated into beliefs about the cycles of nature, such as Mayan astronomy and its relationship with agriculture. Other examples which have brought together ideas of cognition and landscape include studies of the cosmic order embedded in the roads of settlements. Archaeoastronomy can be applied to all cultures and all time periods. The meanings of the sky vary from culture to culture; nevertheless there are scientific methods which can be applied across cultures when examining ancient beliefs. It is perhaps the need to balance the social and scientific aspects of archaeoastronomy which led Clive Ruggles to describe it as "a field with academic work of high quality at one end but uncontrolled speculation bordering on lunacy at the other". History Two hundred years before John Michell wrote the above, there were no archaeoastronomers and there were no professional archaeologists, but there were astronomers and antiquarians. Some of their works are considered precursors of archaeoastronomy; antiquarians interpreted the astronomical orientation of the ruins that dotted the English countryside as William Stukeley did of Stonehenge in 1740, while John Aubrey in 1678 and Henry Chauncy in 1700 sought similar astronomical principles underlying the orientation of churches. Late in the nineteenth century astronomers such as Richard Proctor and Charles Piazzi Smyth investigated the astronomical orientations of the pyramids. The term archaeoastronomy was advanced by Elizabeth Chesley Baity (following the suggestion of Euan MacKie) in 1973, but as a topic of study it may be much older, depending on how archaeoastronomy is defined. Clive Ruggles says that Heinrich Nissen, working in the mid-nineteenth century was arguably the first archaeoastronomer. Rolf Sinclair says that Norman Lockyer, working in the late 19th and early 20th centuries, could be called the 'father of archaeoastronomy'. Euan MacKie would place the origin even later, stating: "...the genesis and modern flowering of archaeoastronomy must surely lie in the work of Alexander Thom in Britain between the 1930s and the 1970s". In the 1960s the work of the engineer Alexander Thom and that of the astronomer Gerald Hawkins, who proposed that Stonehenge was a Neolithic computer, inspired new interest in the astronomical features of ancient sites. The claims of Hawkins were largely dismissed, but this was not the case for Alexander Thom's work, whose survey results of megalithic sites hypothesized widespread practice of accurate astronomy in the British Isles. Euan MacKie, recognizing that Thom's theories needed to be tested, excavated at the Kintraw standing stone site in Argyllshire in 1970 and 1971 to check whether the latter's prediction of an observation platform on the hill slope above the stone was correct. There was an artificial platform there and this apparent verification of Thom's long alignment hypothesis (Kintraw was diagnosed as an accurate winter solstice site) led him to check Thom's geometrical theories at the Cultoon stone circle in Islay, also with a positive result. MacKie therefore broadly accepted Thom's conclusions and published new prehistories of Britain. In contrast a re-evaluation of Thom's fieldwork by Clive Ruggles argued that Thom's claims of high accuracy astronomy were not fully supported by the evidence. Nevertheless, Thom's legacy remains strong, Edwin C. Krupp wrote in 1979, "Almost singlehandedly he has established the standards for archaeo-astronomical fieldwork and interpretation, and his amazing results have stirred controversy during the last three decades." His influence endures and practice of statistical testing of data remains one of the methods of archaeoastronomy. The approach in the New World, where anthropologists began to consider more fully the role of astronomy in Amerindian civilizations, was markedly different. They had access to sources that the prehistory of Europe lacks such as ethnographies and the historical records of the early colonizers. Following the pioneering example of Anthony Aveni, this allowed New World archaeoastronomers to make claims for motives which in the Old World would have been mere speculation. The concentration on historical data led to some claims of high accuracy that were comparatively weak when compared to the statistically led investigations in Europe. This came to a head at a meeting sponsored by the International Astronomical Union (IAU) in Oxford in 1981. The methodologies and research questions of the participants were considered so different that the conference proceedings were published as two volumes. Nevertheless, the conference was considered a success in bringing researchers together and Oxford conferences have continued every four or five years at locations around the world. The subsequent conferences have resulted in a move to more interdisciplinary approaches with researchers aiming to combine the contextuality of archaeological research, which broadly describes the state of archaeoastronomy today, rather than merely establishing the existence of ancient astronomies, archaeoastronomers seek to explain why people would have an interest in the night sky. Relations to other disciplines Archaeoastronomy has long been seen as an interdisciplinary field that uses written and unwritten evidence to study the astronomies of other cultures. As such, it can be seen as connecting other disciplinary approaches for investigating ancient astronomy: astroarchaeology (an obsolete term for studies that draw astronomical information from the alignments of ancient architecture and landscapes), history of astronomy (which deals primarily with the written textual evidence), and ethnoastronomy (which draws on the ethnohistorical record and contemporary ethnographic studies). Reflecting Archaeoastronomy's development as an interdisciplinary subject, research in the field is conducted by investigators trained in a wide range of disciplines. Authors of recent doctoral dissertations have described their work as concerned with the fields of archaeology and cultural anthropology; with various fields of history including the history of specific regions and periods, the history of science and the history of religion; and with the relation of astronomy to art, literature and religion. Only rarely did they describe their work as astronomical, and then only as a secondary category. Both practicing archaeoastronomers and observers of the discipline approach it from different perspectives. Other researchers relate archaeoastronomy to the history of science, either as it relates to a culture's observations of nature and the conceptual framework they devised to impose an order on those observations or as it relates to the political motives which drove particular historical actors to deploy certain astronomical concepts or techniques. Art historian Richard Poss took a more flexible approach, maintaining that the astronomical rock art of the North American Southwest should be read employing "the hermeneutic traditions of western art history and art criticism" Astronomers, however, raise different questions, seeking to provide their students with identifiable precursors of their discipline, and are especially concerned with the important question of how to confirm that specific sites are, indeed, intentionally astronomical. The reactions of professional archaeologists to archaeoastronomy have been decidedly mixed. Some expressed incomprehension or even hostility, varying from a rejection by the archaeological mainstream of what they saw as an archaeoastronomical fringe to an incomprehension between the cultural focus of archaeologists and the quantitative focus of early archaeoastronomers. Yet archaeologists have increasingly come to incorporate many of the insights from archaeoastronomy into archaeology textbooks and, as mentioned above, some students wrote archaeology dissertations on archaeoastronomical topics. Since archaeoastronomers disagree so widely on the characterization of the discipline, they even dispute its name. All three major international scholarly associations relate archaeoastronomy to the study of culture, using the term Astronomy in Culture or a translation. Michael Hoskin sees an important part of the discipline as fact-collecting, rather than theorizing, and proposed to label this aspect of the discipline Archaeotopography. Ruggles and Saunders proposed Cultural Astronomy as a unifying term for the various methods of studying folk astronomies. Others have argued that astronomy is an inaccurate term, what are being studied are cosmologies and people who object to the use of logos have suggested adopting the Spanish cosmovisión. When debates polarise between techniques, the methods are often referred to by a colour code, based on the colours of the bindings of the two volumes from the first Oxford Conference, where the approaches were first distinguished. Green (Old World) archaeoastronomers rely heavily on statistics and are sometimes accused of missing the cultural context of what is a social practice. Brown (New World) archaeoastronomers in contrast have abundant ethnographic and historical evidence and have been described as 'cavalier' on matters of measurement and statistical analysis. Finding a way to integrate various approaches has been a subject of much discussion since the early 1990s. Methodology There is no one way to do archaeoastronomy. The divisions between archaeoastronomers tend not to be between the physical scientists and the social scientists. Instead, it tends to depend on the location and/or kind of data available to the researcher. In the Old World, there is little data but the sites themselves; in the New World, the sites were supplemented by ethnographic and historic data. The effects of the isolated development of archaeoastronomy in different places can still often be seen in research today. Research methods can be classified as falling into one of two approaches, though more recent projects often use techniques from both categories. Green archaeoastronomy Green archaeoastronomy is named after the cover of the book Archaeoastronomy in the Old World. It is based primarily on statistics and is particularly apt for prehistoric sites where the social evidence is relatively scant compared to the historic period. The basic methods were developed by Alexander Thom during his extensive surveys of British megalithic sites. Thom wished to examine whether or not prehistoric peoples used high-accuracy astronomy. He believed that by using horizon astronomy, observers could make estimates of dates in the year to a specific day. The observation required finding a place where on a specific date the Sun set into a notch on the horizon. A common theme is a mountain that blocked the Sun, but on the right day would allow the tiniest fraction to re-emerge on the other side for a 'double sunset'. The animation below shows two sunsets at a hypothetical site, one the day before the summer solstice and one at the summer solstice, which has a double sunset. To test this idea he surveyed hundreds of stone rows and circles. Any individual alignment could indicate a direction by chance, but he planned to show that together the distribution of alignments was non-random, showing that there was an astronomical intent to the orientation of at least some of the alignments. His results indicated the existence of eight, sixteen, or perhaps even thirty-two approximately equal divisions of the year. The two solstices, the two equinoxes and four cross-quarter days, days halfway between a solstice and the equinox were associated with the medieval Celtic calendar. While not all these conclusions have been accepted, it has had an enduring influence on archaeoastronomy, especially in Europe. Euan MacKie has supported Thom's analysis, to which he added an archaeological context by comparing Neolithic Britain to the Mayan civilization to argue for a stratified society in this period. To test his ideas he conducted a couple of excavations at proposed prehistoric observatories in Scotland. Kintraw is a site notable for its four-meter high standing stone. Thom proposed that this was a foresight to a point on the distant horizon between Beinn Shianaidh and Beinn o'Chaolias on Jura. This, Thom argued, was a notch on the horizon where a double sunset would occur at midwinter. However, from ground level, this sunset would be obscured by a ridge in the landscape, and the viewer would need to be raised by two meters: another observation platform was needed. This was identified across a gorge where a platform was formed from small stones. The lack of artifacts caused concern for some archaeologists and the petrofabric analysis was inconclusive, but further research at Maes Howe and on the Bush Barrow Lozenge led MacKie to conclude that while the term 'science' may be anachronistic, Thom was broadly correct upon the subject of high-accuracy alignments. In contrast Clive Ruggles has argued that there are problems with the selection of data in Thom's surveys. Others have noted that the accuracy of horizon astronomy is limited by variations in refraction near the horizon. A deeper criticism of Green archaeoastronomy is that while it can answer whether there was likely to be an interest in astronomy in past times, its lack of a social element means that it struggles to answer why people would be interested, which makes it of limited use to people asking questions about the society of the past. Keith Kintigh wrote: "To put it bluntly, in many cases it doesn't matter much to the progress of anthropology whether a particular archaeoastronomical claim is right or wrong because the information doesn't inform the current interpretive questions." Nonetheless, the study of alignments remains a staple of archaeoastronomical research, especially in Europe. Brown archaeoastronomy In contrast to the largely alignment-oriented statistically led methods of green archaeoastronomy, brown archaeoastronomy has been identified as being closer to the history of astronomy or to cultural history, insofar as it draws on historical and ethnographic records to enrich its understanding of early astronomies and their relations to calendars and ritual. The many records of native customs and beliefs made by Spanish chroniclers and ethnographic researchers means that brown archaeoastronomy is often associated with studies of astronomy in the Americas. One famous site where historical records have been used to interpret sites is Chichen Itza. Rather than analyzing the site and seeing which targets appear popular, archaeoastronomers have instead examined the ethnographic records to see what features of the sky were important to the Mayans and then sought archaeological correlates. One example which could have been overlooked without historical records is the Mayan interest in the planet Venus. This interest is attested to by the Dresden codex which contains tables with information about Venus's appearances in the sky. These cycles would have been of astrological and ritual significance as Venus was associated with Quetzalcoatl or Xolotl. Associations of architectural features with settings of Venus can be found in Chichen Itza, Uxmal, and probably some other Mesoamerican sites. The Temple of the Warriors bears iconography depicting feathered serpents associated with Quetzalcoatl or Kukulcan. This means that the building's alignment towards the place on the horizon where Venus first appears in the evening sky (when it coincides with the rainy season) may be meaningful. However, since both the date and the azimuth of this event change continuously, a solar interpretation of this orientation is much more likely. Aveni claims that another building associated with the planet Venus in the form of Kukulcan, and the rainy season at Chichen Itza is the Caracol. This is a building with a circular tower and doors facing the cardinal directions. The base faces the most northerly setting of Venus. Additionally the pillars of a stylobate on the building's upper platform were painted black and red. These are colours associated with Venus as an evening and morning star. However the windows in the tower seem to have been little more than slots, making them poor at letting light in, but providing a suitable place to view out. In their discussion of the credibility of archaeoastronomical sites, Cotte and Ruggles considered the interpretation that the Caracol is an observatory site was debated among specialists, meeting the second of their four levels of site credibility. Aveni states that one of the strengths of the brown methodology is that it can explore astronomies invisible to statistical analysis and offers the astronomy of the Incas as another example. The empire of the Incas was conceptually divided using ceques, radial routes emanating from the capital at Cusco. Thus there are alignments in all directions which would suggest there is little of astronomical significance, However, ethnohistorical records show that the various directions do have cosmological and astronomical significance with various points in the landscape being significant at different times of the year. In eastern Asia archaeoastronomy has developed from the history of astronomy and much archaeoastronomy is searching for material correlates of the historical record. This is due to the rich historical record of astronomical phenomena which, in China, stretches back into the Han dynasty, in the second century BC. A criticism of this method is that it can be statistically weak. Schaefer in particular has questioned how robust the claimed alignments in the Caracol are. Because of the wide variety of evidence, which can include artefacts as well as sites, there is no one way to practice archaeoastronomy. Despite this it is accepted that archaeoastronomy is not a discipline that sits in isolation. Because archaeoastronomy is an interdisciplinary field, whatever is being investigated should make sense both archaeologically and astronomically. Studies are more likely to be considered sound if they use theoretical tools found in archaeology like analogy and homology and if they can demonstrate an understanding of accuracy and precision found in astronomy. Both quantitative analyses and interpretations based on ethnographic analogies and other contextual evidence have recently been applied in systematic studies of architectural orientations in the Maya area and in other parts of Mesoamerica. Source materials Because archaeoastronomy is about the many and various ways people interacted with the sky, there are a diverse range of sources giving information about astronomical practices. Alignments A common source of data for archaeoastronomy is the study of alignments. This is based on the assumption that the axis of alignment of an archaeological site is meaningfully oriented towards an astronomical target. Brown archaeoastronomers may justify this assumption through reading historical or ethnographic sources, while green archaeoastronomers tend to prove that alignments are unlikely to be selected by chance, usually by demonstrating common patterns of alignment at multiple sites. An alignment is calculated by measuring the azimuth, the angle from north, of the structure and the altitude of the horizon it faces The azimuth is usually measured using a theodolite or a compass. A compass is easier to use, though the deviation of the Earth's magnetic field from true north, known as its magnetic declination must be taken into account. Compasses are also unreliable in areas prone to magnetic interference, such as sites being supported by scaffolding. Additionally a compass can only measure the azimuth to a precision of a half a degree. A theodolite can be considerably more accurate if used correctly, but it is also considerably more difficult to use correctly. There is no inherent way to align a theodolite with North and so the scale has to be calibrated using astronomical observation, usually the position of the Sun. Because the position of celestial bodies changes with the time of day due to the Earth's rotation, the time of these calibration observations must be accurately known, or else there will be a systematic error in the measurements. Horizon altitudes can be measured with a theodolite or a clinometer. Artifacts For artifacts such as the Sky Disc of Nebra, alleged to be a Bronze Age artefact depicting the cosmos, the analysis would be similar to typical post-excavation analysis as used in other sub-disciplines in archaeology. An artefact is examined and attempts are made to draw analogies with historical or ethnographical records of other peoples. The more parallels that can be found, the more likely an explanation is to be accepted by other archaeologists. A more mundane example is the presence of astrological symbols found on some shoes and sandals from the Roman Empire. The use of shoes and sandals is well known, but Carol van Driel-Murray has proposed that astrological symbols etched onto sandals gave the footwear spiritual or medicinal meanings. This is supported through citation of other known uses of astrological symbols and their connection to medical practice and with the historical records of the time. Another well-known artefact with an astronomical use is the Antikythera mechanism. In this case analysis of the artefact, and reference to the description of similar devices described by Cicero, would indicate a plausible use for the device. The argument is bolstered by the presence of symbols on the mechanism, allowing the disc to be read. Art and inscriptions Art and inscriptions may not be confined to artefacts, but also appear painted or inscribed on an archaeological site. Sometimes inscriptions are helpful enough to give instructions to a site's use. For example, a Greek inscription on a stele (from Itanos) has been translated as:"Patron set this up for Zeus Epopsios. Winter solstice. Should anyone wish to know: off 'the little pig' and the stele the sun turns." From Mesoamerica come Mayan and Aztec codices. These are folding books made from Amatl, processed tree bark on which are glyphs in Mayan or Aztec script. The Dresden codex contains information regarding the Venus cycle, confirming its importance to the Mayans. More problematic are those cases where the movement of the Sun at different times and seasons causes light and shadow interactions with petroglyphs. A widely known example is the Sun Dagger of Fajada Butte at which a glint of sunlight passes over a spiral petroglyph. The location of a dagger of light on the petroglyph varies throughout the year. At the summer solstice a dagger can be seen through the heart of the spiral; at the winter solstice two daggers appear to either side of it. It is proposed that this petroglyph was created to mark these events. Recent studies have identified many similar sites in the US Southwest and Northwestern Mexico. It has been argued that the number of solstitial markers at these sites provides statistical evidence that they were intended to mark the solstices. The Sun Dagger site on Fajada Butte in Chaco Canyon, New Mexico, stands out for its explicit light markings that record all the key events of both the solar and lunar cycles: summer solstice, winter solstice, equinox, and the major and minor lunar standstills of the Moon's 18.6 year cycle. In addition at two other sites on Fajada Butte, there are five light markings on petroglyphs recording the summer and winter solstices, equinox and solar noon. Numerous buildings and interbuilding alignments of the great houses of Chaco Canyon and outlying areas are oriented to the same solar and lunar directions that are marked at the Sun Dagger site. If no ethnographic nor historical data are found which can support this assertion then acceptance of the idea relies upon whether or not there are enough petroglyph sites in North America that such a correlation could occur by chance. It is helpful when petroglyphs are associated with existing peoples. This allows ethnoastronomers to question informants as to the meaning of such symbols. Ethnographies As well as the materials left by peoples themselves, there are also the reports of other who have encountered them. The historical records of the Conquistadores are a rich source of information about the pre-Columbian Americans. Ethnographers also provide material about many other peoples. Aveni uses the importance of zenith passages as an example of the importance of ethnography. For peoples living between the tropics of Cancer and Capricorn there are two days of the year when the noon Sun passes directly overhead and casts no shadow. In parts of Mesoamerica this was considered a significant day as it would herald the arrival of rains, and so play a part in the cycle of agriculture. This knowledge is still considered important amongst Mayan Indians living in Central America today. The ethnographic records suggested to archaeoastronomers that this day may have been important to the ancient Mayans. There are also shafts known as 'zenith tubes' which illuminate subterranean rooms when the Sun passes overhead found at places like Monte Albán and Xochicalco. It is only through the ethnography that we can speculate that the timing of the illumination was considered important in Mayan society. Alignments to the sunrise and sunset on the day of the zenith passage have been claimed to exist at several sites. However, it has been shown that, since there are very few orientations that can be related to these phenomena, they likely have different explanations. Ethnographies also caution against over-interpretation of sites. At a site in Chaco Canyon can be found a pictograph with a star, crescent and hand. It has been argued by some astronomers that this is a record of the 1054 Supernova. However recent reexaminations of related 'supernova petroglyphs' raises questions about such sites in general. Cotte and Ruggles used the Supernova petroglyph as an example of a completely refuted site and anthropological evidence suggests other interpretations. The Zuni people, who claim a strong ancestral affiliation with Chaco, marked their sun-watching station with a crescent, star, hand and sundisc, similar to those found at the Chaco site. Ethnoastronomy is also an important field outside of the Americas. For example, anthropological work with Aboriginal Australians is producing much information about their Indigenous astronomies and about their interaction with the modern world. Recreating the ancient sky Once the researcher has data to test, it is often necessary to attempt to recreate ancient sky conditions to place the data in its historical environment. Declination To calculate what astronomical features a structure faced a coordinate system is needed. The stars provide such a system. On a clear night observe the stars spinning around the celestial pole can be observed. This point is +90° of the North Celestial Pole or −90° observing the Southern Celestial Pole. The concentric circles the stars trace out are lines of celestial latitude, known as declination. The arc connecting the points on the horizon due East and due West (if the horizon is flat) and all points midway between the Celestial Poles is the Celestial Equator which has a declination of 0°. The visible declinations vary depending where you are on the globe. Only an observer on the North Pole of Earth would be unable to see any stars from the Southern Celestial Hemisphere at night (see diagram below). Once a declination has been found for the point on the horizon that a building faces it is then possible to say whether a specific body can be seen in that direction. Solar positioning While the stars are fixed to their declinations the Sun is not. The rising point of the Sun varies throughout the year. It swings between two limits marked by the solstices a bit like a pendulum, slowing as it reaches the extremes, but passing rapidly through the midpoint. If an archaeoastronomer can calculate from the azimuth and horizon height that a site was built to view a declination of +23.5° then he or she need not wait until 21 June to confirm the site does indeed face the summer solstice. For more information see History of solar observation. Lunar positioning The Moon's appearance is considerably more complex. Its motion, like the Sun, is between two limits—known as lunistices rather than solstices. However, its travel between lunistices is considerably faster. It takes a sidereal month to complete its cycle rather than the year-long trek of the Sun. This is further complicated as the lunistices marking the limits of the Moon's movement move on an 18.6 year cycle. For slightly over nine years the extreme limits of the Moon are outside the range of sunrise. For the remaining half of the cycle the Moon never exceeds the limits of the range of sunrise. However, much lunar observation was concerned with the phase of the Moon. The cycle from one New Moon to the next runs on an entirely different cycle, the Synodic month. Thus when examining sites for lunar significance the data can appear sparse due to the extremely variable nature of the Moon. See Moon for more details. Stellar positioning Finally there is often a need to correct for the apparent movement of the stars. On the timescale of human civilisation the stars have largely maintained the same position relative to each other. Each night they appear to rotate around the celestial poles due to the Earth's rotation about its axis. However, the Earth spins rather like a spinning top. Not only does the Earth rotate, it wobbles. The Earth's axis takes around 25,800 years to complete one full wobble. The effect to the archaeoastronomer is that stars did not rise over the horizon in the past in the same places as they do today. Nor did the stars rotate around Polaris as they do now. In the case of the Egyptian pyramids, it has been shown they were aligned towards Thuban, a faint star in the constellation of Draco. The effect can be substantial over relatively short lengths of time, historically speaking. For instance a person born on 25 December in Roman times would have been born with the Sun in the constellation Capricorn. In the modern period a person born on the same date would have the Sun in Sagittarius due to the precession of the equinoxes. Transient phenomena Additionally there are often transient phenomena, events which do not happen on an annual cycle. Most predictable are events like eclipses. In the case of solar eclipses these can be used to date events in the past. A solar eclipse mentioned by Herodotus enables us to date a battle between the Medes and the Lydians, which following the eclipse failed to happen, to 28 May, 585 BC. Other easily calculated events are supernovae whose remains are visible to astronomers and therefore their positions and magnitude can be accurately calculated. Some comets are predictable, most famously Halley's Comet. Yet as a class of object they remain unpredictable and can appear at any time. Some have extremely lengthy orbital periods which means their past appearances and returns cannot be predicted. Others may have only ever passed through the Solar System once and so are inherently unpredictable. Meteor showers should be predictable, but some meteors are cometary debris and so require calculations of orbits which are currently impossible to complete. Other events noted by ancients include aurorae, sun dogs and rainbows all of which are as impossible to predict as the ancient weather, but nevertheless may have been considered important phenomena. Major topics of archaeoastronomical research The use of calendars A common justification for the need for astronomy is the need to develop an accurate calendar for agricultural reasons. Ancient texts like Hesiod's Works and Days, an ancient farming manual, would appear to partially confirm this: astronomical observations are used in combination with ecological signs, such as bird migrations to determine the seasons. Ethnoastronomical studies of the Hopi of the southwestern United States indicate that they carefully observed the rising and setting positions of the Sun to determine the proper times to plant crops. However, ethnoastronomical work with the Mursi of Ethiopia shows that their luni-solar calendar was somewhat haphazard, indicating the limits of astronomical calendars in some societies. All the same, calendars appear to be an almost universal phenomenon in societies as they provide tools for the regulation of communal activities. One such example is the Tzolk'in calendar of 260 days. Together with the 365-day year, it was used in pre-Columbian Mesoamerica, forming part of a comprehensive calendrical system, which combined a series of astronomical observations and ritual cycles. Archaeoastronomical studies throughout Mesoamerica have shown that the orientations of most structures refer to the Sun and were used in combination with the 260-day cycle for scheduling agricultural activities and the accompanying rituals. The distribution of dates and intervals marked by orientations of monumental ceremonial complexes in the area along the southern Gulf Coast in Mexico, dated to about 1100 to 700 BCE, represents the earliest evidence of the use of this cycle. Other peculiar calendars include ancient Greek calendars. These were nominally lunar, starting with the New Moon. In reality the calendar could pause or skip days with confused citizens inscribing dates by both the civic calendar and ton theoi, by the moon. The lack of any universal calendar for ancient Greece suggests that coordination of panhellenic events such as games or rituals could be difficult and that astronomical symbolism may have been used as a politically neutral form of timekeeping. Orientation measurements in Greek temples and Byzantine churches have been associated to deity's name day, festivities, and special events. Myth and cosmology Another motive for studying the sky is to understand and explain the universe. In these cultures myth was a tool for achieving this, and the explanations, while not reflecting the standards of modern science, are cosmologies. The Incas arranged their empire to demonstrate their cosmology. The capital, Cusco, was at the centre of the empire and connected to it by means of ceques, conceptually straight lines radiating out from the centre. These ceques connected the centre of the empire to the four suyus, which were regions defined by their direction from Cusco. The notion of a quartered cosmos is common across the Andes. Gary Urton, who has conducted fieldwork in the Andean villagers of Misminay, has connected this quartering with the appearance of the Milky Way in the night sky. In one season it will bisect the sky and in another bisect it in a perpendicular fashion. The importance of observing cosmological factors is also seen on the other side of the world. The Forbidden City in Beijing is laid out to follow cosmic order though rather than observing four directions. The Chinese system was composed of five directions: North, South, East, West and Centre. The Forbidden City occupied the centre of ancient Beijing. One approaches the Emperor from the south, thus placing him in front of the circumpolar stars. This creates the situation of the heavens revolving around the person of the Emperor. The Chinese cosmology is now better known through its export as feng shui. There is also much information about how the universe was thought to work stored in the mythology of the constellations. The Barasana of the Amazon plan part of their annual cycle based on observation of the stars. When their constellation of the Caterpillar-Jaguar (roughly equivalent to the modern Scorpius) falls they prepare to catch the pupating caterpillars of the forest as they fall from the trees. The caterpillars provide food at a season when other foods are scarce. A more well-known source of constellation myth are the texts of the Greeks and Romans. The origin of their constellations remains a matter of vigorous and occasionally fractious debate. The loss of one of the sisters, Merope, in some Greek myths may reflect an astronomical event wherein one of the stars in the Pleiades disappeared from view by the naked eye. Giorgio de Santillana, professor of the History of Science in the School of Humanities at the Massachusetts Institute of Technology, along with Hertha von Dechend believed that the old mythological stories handed down from antiquity were not random fictitious tales but were accurate depictions of celestial cosmology clothed in tales to aid their oral transmission. The chaos, monsters and violence in ancient myths are representative of the forces that shape each age. They believed that ancient myths are the remains of preliterate astronomy that became lost with the rise of the Greco-Roman civilization. Santillana and von Dechend in their book Hamlet's Mill, An Essay on Myth and the Frame of Time (1969) clearly state that ancient myths have no historical or factual basis other than a cosmological one encoding astronomical phenomena, especially the precession of the equinoxes. Santillana and von Dechend's approach is not widely accepted. Displays of power By including celestial motifs in clothing it becomes possible for the wearer to make claims the power on Earth is drawn from above. It has been said that the Shield of Achilles described by Homer is also a catalogue of constellations. In North America shields depicted in Comanche petroglyphs appear to include Venus symbolism. Solsticial alignments also can be seen as displays of power. When viewed from a ceremonial plaza on the Island of the Sun (the mythical origin place of the Sun) in Lake Titicaca, the Sun was seen to rise at the June solstice between two towers on a nearby ridge. The sacred part of the island was separated from the remainder of it by a stone wall and ethnographic records indicate that access to the sacred space was restricted to members of the Inca ruling elite. Ordinary pilgrims stood on a platform outside the ceremonial area to see the solstice Sun rise between the towers. In Egypt the temple of Amun-Re at Karnak has been the subject of much study. Evaluation of the site, taking into account the change over time of the obliquity of the ecliptic show that the Great Temple was aligned on the rising of the midwinter Sun. The length of the corridor down which sunlight would travel would have limited illumination at other times of the year. In a later period the Serapeum of Alexandria was also said to have contained a solar alignment so that, on a specific sunrise, a shaft of light would pass across the lips of the statue of Serapis thus symbolising the Sun saluting the god. Major sites of archaeoastronomical interest Clive Ruggles and Michel Cotte recently edited a book on heritage sites of astronomy and archaeoastronomy which discussed a worldwide sample of astronomical and archaeoastronomical sites and provided criteria for the classification of archaeoastronomical sites. Newgrange Newgrange is a passage tomb in the Republic of Ireland dating from around 3,300 to 2,900 BC For a few days around the Winter Solstice light shines along the central passageway into the heart of the tomb. What makes this notable is not that light shines in the passageway, but that it does not do so through the main entrance. Instead it enters via a hollow box above the main doorway discovered by Michael O'Kelly. It is this roofbox which strongly indicates that the tomb was built with an astronomical aspect in mind. In their discussion of the credibility of archaeoastronomical sites, Cotte and Ruggles gave Newgrange as an example of a Generally accepted site, the highest of their four levels of credibility. Clive Ruggles notes: Egypt Since the first modern measurements of the precise cardinal orientations of the pyramids by Flinders Petrie, various astronomical methods have been proposed for the original establishment of these orientations. It was recently proposed that this was done by observing the positions of two stars in the Plough / Big Dipper which was known to Egyptians as the thigh. It is thought that a vertical alignment between these two stars checked with a plumb bob was used to ascertain where north lay. The deviations from true north using this model reflect the accepted dates of construction. Some have argued that the pyramids were laid out as a map of the three stars in the belt of Orion, although this theory has been criticized by reputable astronomers. The site was instead probably governed by a spectacular hierophany which occurs at the summer solstice, when the Sun, viewed from the Sphinx terrace, forms—together with the two giant pyramids—the symbol Akhet, which was also the name of the Great Pyramid. Further, the south east corners of all the three pyramids align towards the temple of Heliopolis, as first discovered by the Egyptologist Mark Lehner. The astronomical ceiling of the tomb of Senenmut (BC) contains the Celestial Diagram depicting circumpolar constellations in the form of discs. Each disc is divided into 24 sections suggesting a 24-hour time period. Constellations are portrayed as sacred deities of Egypt. The observation of lunar cycles is also evident. El Castillo El Castillo, also known as Kukulcán's Pyramid, is a Mesoamerican step-pyramid built in the centre of Mayan center of Chichen Itza in Mexico. Several architectural features have suggested astronomical elements. Each of the stairways built into the sides of the pyramid has 91 steps. Along with the extra one for the platform at the top, this totals 365 steps, which is possibly one for each day of the year (365.25) or the number of lunar orbits in 10,000 rotations (365.01). A visually striking effect is seen every March and September as an unusual shadow occurs around the equinoxes. Light and shadow phenomena have been proposed to explain a possible architectural hierophany involving the sun at Chichén Itzá in a Maya Toltec structure dating to about 1000 CE. A shadow appears to descend the west balustrade of the northern stairway. The visual effect is of a serpent descending the stairway, with its head at the base in light. Additionally the western face points to sunset around 25 May, traditionally the date of transition from the dry to the rainy season. The intended alignment was, however, likely incorporated in the northern (main) facade of the temple, as it corresponds to sunsets on May 20 and July 24, recorded also by the central axis of Castillo at Tulum. The two dates are separated by 65 and 300 days, and it has been shown that the solar orientations in Mesoamerica regularly correspond to dates separated by calendrically significant intervals (multiples of 13 and 20 days). In their discussion of the credibility of archaeoastronomical sites, Cotte and Ruggles used the "equinox hierophany" at Chichén Itzá as an example of an Unproven site, the third of their four levels of credibility. Stonehenge Many astronomical alignments have been claimed for Stonehenge, a complex of megaliths and earthworks in the Salisbury Plain of England. The most famous of these is the midsummer alignment, where the Sun rises over the Heel Stone. However, this interpretation has been challenged by some archaeologists who argue that the midwinter alignment, where the viewer is outside Stonehenge and sees the Sun setting in the henge, is the more significant alignment, and the midsummer alignment may be a coincidence due to local topography. In their discussion of the credibility of archaeoastronomical sites, Cotte and Ruggles gave Stonehenge as an example of a Generally accepted site, the highest of their four levels of credibility. As well as solar alignments, there are proposed lunar alignments. The four station stones mark out a rectangle. The short sides point towards the midsummer sunrise and midwinter sunset. The long sides if viewed towards the south-east, face the most southerly rising of the Moon. Aveni notes that these lunar alignments have never gained the acceptance that the solar alignments have received. The Heel Stone azimuth is one-seventh of circumference, matching the latitude of Avebury, while summer solstice sunrise azimuth is no longer equal to the construction era direction. Maeshowe This is an architecturally outstanding Neolithic chambered tomb on the mainland of Orkney, Scotland—probably dating to the early 3rd millennium BC, and where the setting Sun at midwinter shines down the entrance passage into the central chamber (see Newgrange). In the 1990s further investigations were carried out to discover whether this was an accurate or an approximate solar alignment. Several new aspects of the site were discovered. In the first place the entrance passage faces the hills of the island Hoy, about 10 miles away. Secondly, it consists of two straight lengths, angled at a few degrees to each other. Thirdly, the outer part is aligned towards the midwinter sunset position on a level horizon just to the left of Ward Hill on Hoy. Fourthly the inner part points directly at the Barnhouse standing stone about 400m away and then to the right end of the summit of Ward Hill, just before it dips down to the notch between it at Cuilags to the right. This indicated line points to sunset on the first Sixteenths of the solar year (according to A. Thom) before and after the winter solstice and the notch at the base of the right slope of the Hill is at the same declination. Fourthly a similar 'double sunset' phenomenon is seen at the right end of Cuilags, also on Hoy; here the date is the first Eighth of the year before and after the winter solstice, at the beginning of November and February respectively—the Old Celtic festivals of Samhain and Imbolc. This alignment is not indicated by an artificial structure but gains plausibility from the other two indicated lines. Maeshowe is thus an extremely sophisticated calendar site which must have been positioned carefully in order to use the horizon foresights in the ways described. Uxmal Uxmal is a Mayan city in the Puuc Hills of Yucatán Peninsula, Mexico. The Governor's Palace at Uxmal is often used as an exemplar of why it is important to combine ethnographic and alignment data. The palace is aligned with an azimuth of 118° on the pyramid of Cehtzuc. This alignment corresponds approximately to the southernmost rising and, with a much greater precision, to the northernmost setting of Venus; both phenomena occur once every eight years. By itself this would not be sufficient to argue for a meaningful connection between the two events. The palace has to be aligned in one direction or another and why should the rising of Venus be any more important than the rising of the Sun, Moon, other planets, Sirius et cetera? The answer given is that not only does the palace point towards significant points of Venus, it is also covered in glyphs which stand for Venus and Mayan zodiacal constellations. Moreover, the great northerly extremes of Venus always occur in late April or early May, coinciding with the onset of the rainy season. The Venus glyphs placed in the cheeks of the Maya rain god Chac, most likely referring to the concomitance of these phenomena, support the west-working orientation scheme. Chaco Canyon In Chaco Canyon, the center of the ancient Pueblo culture in the American Southwest, numerous solar and lunar light markings and architectural and road alignments have been documented. These findings date to the 1977 discovery of the Sun Dagger site by Anna Sofaer. Three large stone slabs leaning against a cliff channel light and shadow markings onto two spiral petroglyphs on the cliff wall, marking the solstices, equinoxes and the lunar standstills of the 18.6 year cycle of the moon. Subsequent research by the Solstice Project and others demonstrated that numerous building and interbuilding alignments of the great houses of Chaco Canyon are oriented to solar, lunar and cardinal directions. In addition, research shows that the Great North Road, a thirty-five mile engineered "road", was constructed not for utilitarian purposes but rather to connect the ceremonial center of Chaco Canyon with the direction north. Lascaux Cave In recent years, new research has suggested that the Lascaux cave paintings in France may incorporate prehistoric star charts. Michael Rappenglueck of the University of Munich argues that some of the non-figurative dot clusters and dots within some of the figurative images correlate with the constellations of Taurus, the Pleiades and the grouping known as the "Summer Triangle". Based on her own study of the astronomical significance of Bronze Age petroglyphs in the Vallée des Merveilles and her extensive survey of other prehistoric cave painting sites in the region—most of which appear to have been selected because the interiors are illuminated by the setting Sun on the day of the winter solstice—French researcher Chantal Jègues-Wolkiewiez has further proposed that the gallery of figurative images in the Great Hall represents an extensive star map and that key points on major figures in the group correspond to stars in the main constellations as they appeared in the Paleolithic. Appliying phylogenetics to myths of the Cosmic Hunt, Julien d'Huy suggested that the palaeolithic version of this story could be the following: there is an animal that is a horned herbivore, especially an elk. One human pursues this ungulate. The hunt locates or gets to the sky. The animal is alive when it is transformed into a constellation. It forms the Big Dipper. This story may be represented in the famous Lascaux shaft 'scene' Fringe archaeoastronomy Archaeoastronomy owes something of this poor reputation among scholars to its occasional misuse to advance a range of pseudo-historical accounts. During the 1930s, Otto S. Reuter compiled a study entitled Germanische Himmelskunde, or "Teutonic Skylore". The astronomical orientations of ancient monuments claimed by Reuter and his followers would place the ancient Germanic peoples ahead of the Ancient Near East in the field of astronomy, demonstrating the intellectual superiority of the "Aryans" (Indo-Europeans) over the Semites. More recently Gallagher, Pyle, and Fell interpreted inscriptions in West Virginia as a description in Celtic Ogham alphabet of the supposed winter solstitial marker at the site. The controversial translation was supposedly validated by a problematic archaeoastronomical indication in which the winter solstice Sun shone on an inscription of the Sun at the site. Subsequent analyses criticized its cultural inappropriateness, as well as its linguistic and archaeoastronomical claims, to describe it as an example of "cult archaeology". Archaeoastronomy is sometimes related to the fringe discipline of Archaeocryptography, when its followers attempt to find underlying mathematical orders beneath the proportions, size, and placement of archaeoastronomical sites such as Stonehenge and the Pyramid of Kukulcán at Chichen Itza. India Since the 19th century, numerous scholars have sought to use archaeoastronomical calculations to demonstrate the antiquity of Ancient Indian Vedic culture, computing the dates of astronomical observations ambiguously described in ancient poetry to as early as 4000 BC. David Pingree, a historian of Indian astronomy, condemned "the scholars who perpetrate wild theories of prehistoric science and call themselves archaeoastronomers." Organisations and publications There are currently three academic organisations for scholars of archaeoastronomy. ISAACthe International Society for Archaeoastronomy and Astronomy in Culturewas founded in 1995 and now sponsors the Oxford conferences and Archaeoastronomy – the Journal of Astronomy in Culture. SEAC – La Société Européenne pour l'Astronomie dans la Culture is slightly older; it was created in 1992. SEAC holds annual conferences in Europe and publishes refereed conference proceedings on an annual basis. There is also SIACLa Sociedad Interamericana de Astronomía en la Cultura, primarily a Latin American organisation which was founded in 2003. In 2009, the Society for Cultural Astronomy in the American Southwest (SCAAS) was founded, a regional organisation focusing on the astronomies of the native peoples of the Southwestern United States; it has since held seven meetings and workshops. Two new organisations focused on regional archaeoastronomy were founded in 2013: ASIA – the Australian Society for Indigenous Astronomy in Australia and SMART – the Society of Māori Astronomy Research and Traditions in New Zealand. Additionally, in 2017, the Romanian Society for Cultural Astronomy ex was founded. It holds an annual international conference and has published the first monograph on archaeo- and ethnoastronomy in Romania (2019). Additionally the Journal for the History of Astronomy publishes many archaeoastronomical papers. For twenty-seven volumes (from 1979 to 2002) it published an annual supplement Archaeoastronomy. The Journal of Astronomical History and Heritage (National Astronomical Research Institute of Thailand), Culture & Cosmos (University of Wales, UK) and Mediterranean Archaeology and Archaeometry (University of Aegean, Greece) also publish papers on archaeoastronomy. Various national archaeoastronomical projects have been undertaken. Among them is the program at the Tata Institute of Fundamental Research named "Archaeo Astronomy in Indian Context" that has made interesting findings in this field. See also References Citations Bibliography   . reprinted in Michael H. Shank, ed., The Scientific Enterprise in Antiquity and the Middle Ages (Chicago: Univ. of Chicago Pr., 2000), pp. 30–39. Three volumes; 217 articles. Šprajc, Ivan (2015). Governor's Palace at Uxmal. In: Handbook of Archaeoastronomy and Ethnoastronomy, ed. by Clive L. N. Ruggles, New York: Springer, pp. 773–81 Šprajc, Ivan, and Pedro Francisco Sánchez Nava (2013). Astronomía en la arquitectura de Chichén Itzá: una reevaluación. Estudios de Cultura Maya XLI: 31–60. Further reading External links Astronomy before History - A chapter from The Cambridge Concise History of Astronomy, Michael Hoskin ed., 1999. Clive Ruggles: images, bibliography, software, and synopsis of his course at the University of Leicester. Traditions of the Sun – NASA and others exploring the world's ancient observatories. Ancient Observatories: Timeless Knowledge NASA Poster on ancient (and modern) observatories. Astronomy is the most ancient of the sciences. (About Kazakh folk astronomy) Ancient astronomy Astronomical sub-disciplines Archaeological sub-disciplines Traditional knowledge Articles containing video clips
2866
https://en.wikipedia.org/wiki/Ammeter
Ammeter
An ammeter (abbreviation of Ampere meter) is an instrument used to measure the current in a circuit. Electric currents are measured in amperes (A), hence the name. For direct measurement, the ammeter is connected in series with the circuit in which the current is to be measured. An ammeter usually has low resistance so that it does not cause a significant voltage drop in the circuit being measured. Instruments used to measure smaller currents, in the milliampere or microampere range, are designated as milliammeters or microammeters. Early ammeters were laboratory instruments that relied on the Earth's magnetic field for operation. By the late 19th century, improved instruments were designed which could be mounted in any position and allowed accurate measurements in electric power systems. It is generally represented by letter 'A' in a circuit. History The relation between electric current, magnetic fields and physical forces was first noted by Hans Christian Ørsted in 1820, who observed a compass needle was deflected from pointing North when a current flowed in an adjacent wire. The tangent galvanometer was used to measure currents using this effect, where the restoring force returning the pointer to the zero position was provided by the Earth's magnetic field. This made these instruments usable only when aligned with the Earth's field. Sensitivity of the instrument was increased by using additional turns of wire to multiply the effect – the instruments were called "multipliers". The word rheoscope as a detector of electrical currents was coined by Sir Charles Wheatstone about 1840 but is no longer used to describe electrical instruments. The word makeup is similar to that of rheostat (also coined by Wheatstone) which was a device used to adjust the current in a circuit. Rheostat is a historical term for a variable resistance, though unlike rheoscope may still be encountered. Types Some instruments are panel meters, meant to be mounted on some sort of control panel. Of these, the flat, horizontal or vertical type is often called an edgewise meter. Moving-coil The D'Arsonval galvanometer is a moving coil ammeter. It uses magnetic deflection, where current passing through a coil placed in the magnetic field of a permanent magnet causes the coil to move. The modern form of this instrument was developed by Edward Weston, and uses two spiral springs to provide the restoring force. The uniform air gap between the iron core and the permanent magnet poles make the deflection of the meter linearly proportional to current. These meters have linear scales. Basic meter movements can have full-scale deflection for currents from about 25 microamperes to 10 milliamperes. Because the magnetic field is polarised, the meter needle acts in opposite directions for each direction of current. A DC ammeter is thus sensitive to which polarity it is connected in; most are marked with a positive terminal, but some have centre-zero mechanisms and can display currents in either direction. A moving coil meter indicates the average (mean) of a varying current through it, which is zero for AC. For this reason, moving-coil meters are only usable directly for DC, not AC. This type of meter movement is extremely common for both ammeters and other meters derived from them, such as voltmeters and ohmmeters. Moving magnet Moving magnet ammeters operate on essentially the same principle as moving coil, except that the coil is mounted in the meter case, and a permanent magnet moves the needle. Moving magnet Ammeters are able to carry larger currents than moving coil instruments, often several tens of Amperes, because the coil can be made of thicker wire and the current does not have to be carried by the hairsprings. Indeed, some Ammeters of this type do not have hairsprings at all, instead using a fixed permanent magnet to provide the restoring force. Electrodynamic An electrodynamic ammeter uses an electromagnet instead of the permanent magnet of the d'Arsonval movement. This instrument can respond to both alternating and direct current and also indicates true RMS for AC. See Wattmeter for an alternative use for this instrument. Moving-iron Moving iron ammeters use a piece of iron which moves when acted upon by the electromagnetic force of a fixed coil of wire. The moving-iron meter was invented by Austrian engineer Friedrich Drexler in 1884. This type of meter responds to both direct and alternating currents (as opposed to the moving-coil ammeter, which works on direct current only). The iron element consists of a moving vane attached to a pointer, and a fixed vane, surrounded by a coil. As alternating or direct current flows through the coil and induces a magnetic field in both vanes, the vanes repel each other and the moving vane deflects against the restoring force provided by fine helical springs. The deflection of a moving iron meter is proportional to the square of the current. Consequently, such meters would normally have a nonlinear scale, but the iron parts are usually modified in shape to make the scale fairly linear over most of its range. Moving iron instruments indicate the RMS value of any AC waveform applied. Moving iron ammeters are commonly used to measure current in industrial frequency AC circuits. Hot-wire In a hot-wire ammeter, a current passes through a wire which expands as it heats. Although these instruments have slow response time and low accuracy, they were sometimes used in measuring radio-frequency current. These also measure true RMS for an applied AC. Digital In much the same way as the analogue ammeter formed the basis for a wide variety of derived meters, including voltmeters, the basic mechanism for a digital meter is a digital voltmeter mechanism, and other types of meter are built around this. Digital ammeter designs use a shunt resistor to produce a calibrated voltage proportional to the current flowing. This voltage is then measured by a digital voltmeter, through use of an analog-to-digital converter (ADC); the digital display is calibrated to display the current through the shunt. Such instruments are often calibrated to indicate the RMS value for a sine wave only, but many designs will indicate true RMS within limitations of the wave crest factor. Integrating There is also a range of devices referred to as integrating ammeters. In these ammeters the current is summed over time, giving as a result the product of current and time; which is proportional to the electrical charge transferred with that current. These can be used for metering energy (the charge needs to be multiplied by the voltage to give energy) or for estimating the charge of a battery or capacitor. Picoammeter A picoammeter, or pico ammeter, measures very low electric current, usually from the picoampere range at the lower end to the milliampere range at the upper end. Picoammeters are used where the current being measured is below the limits of sensitivity of other devices, such as multimeters. Most picoammeters use a "virtual short" technique and have several different measurement ranges that must be switched between to cover multiple decades of measurement. Other modern picoammeters use log compression and a "current sink" method that eliminates range switching and associated voltage spikes. Special design and usage considerations must be observed in order to reduce leakage current which may swamp measurements such as special insulators and driven shields. Triaxial cable is often used for probe connections. Application Ammeters must be connected in series with the circuit to be measured. For relatively small currents (up to a few amperes), an ammeter may pass the whole of the circuit current. For larger direct currents, a shunt resistor carries most of the circuit current and a small, accurately-known fraction of the current passes through the meter movement. For alternating current circuits, a current transformer may be used to provide a convenient small current to drive an instrument, such as 1 or 5 amperes, while the primary current to be measured is much larger (up to thousands of amperes). The use of a shunt or current transformer also allows convenient location of the indicating meter without the need to run heavy circuit conductors up to the point of observation. In the case of alternating current, the use of a current transformer also isolates the meter from the high voltage of the primary circuit. A shunt provides no such isolation for a direct-current ammeter, but where high voltages are used it may be possible to place the ammeter in the "return" side of the circuit which may be at low potential with respect to earth. Ammeters must not be connected directly across a voltage source since their internal resistance is very low and excess current would flow. Ammeters are designed for a low voltage drop across their terminals, much less than one volt; the extra circuit losses produced by the ammeter are called its "burden" on the measured circuit(I). Ordinary Weston-type meter movements can measure only milliamperes at most, because the springs and practical coils can carry only limited currents. To measure larger currents, a resistor called a shunt is placed in parallel with the meter. The resistances of shunts is in the integer to fractional milliohm range. Nearly all of the current flows through the shunt, and only a small fraction flows through the meter. This allows the meter to measure large currents. Traditionally, the meter used with a shunt has a full-scale deflection (FSD) of , so shunts are typically designed to produce a voltage drop of when carrying their full rated current. To make a multi-range ammeter, a selector switch can be used to connect one of a number of shunts across the meter. It must be a make-before-break switch to avoid damaging current surges through the meter movement when switching ranges. A better arrangement is the Ayrton shunt or universal shunt, invented by William E. Ayrton, which does not require a make-before-break switch. It also avoids any inaccuracy because of contact resistance. In the figure, assuming for example, a movement with a full-scale voltage of 50 mV and desired current ranges of 10 mA, 100 mA, and 1 A, the resistance values would be: R1 = 4.5 ohms, R2 = 0.45 ohm, R3 = 0.05 ohm. And if the movement resistance is 1000 ohms, for example, R1 must be adjusted to 4.525 ohms. Switched shunts are rarely used for currents above 10 amperes. Zero-center ammeters are used for applications requiring current to be measured with both polarities, common in scientific and industrial equipment. Zero-center ammeters are also commonly placed in series with a battery. In this application, the charging of the battery deflects the needle to one side of the scale (commonly, the right side) and the discharging of the battery deflects the needle to the other side. A special type of zero-center ammeter for testing high currents in cars and trucks has a pivoted bar magnet that moves the pointer, and a fixed bar magnet to keep the pointer centered with no current. The magnetic field around the wire carrying current to be measured deflects the moving magnet. Since the ammeter shunt has a very low resistance, mistakenly wiring the ammeter in parallel with a voltage source will cause a short circuit, at best blowing a fuse, possibly damaging the instrument and wiring, and exposing an observer to injury. In AC circuits, a current transformer can be used to convert the large current in the main circuit into a smaller current more suited to a meter. Some designs of transformer are able to directly convert the magnetic field around a conductor into a small AC current, typically either or at full rated current, that can be easily read by a meter. In a similar way, accurate AC/DC non-contact ammeters have been constructed using Hall effect magnetic field sensors. A portable hand-held clamp-on ammeter is a common tool for maintenance of industrial and commercial electrical equipment, which is temporarily clipped over a wire to measure current. Some recent types have a parallel pair of magnetically soft probes that are placed on either side of the conductor. See also Clamp meter Class of accuracy in electrical measurements Electric circuit Electrical measurements Electrical current#Measurement Electronics List of electronics topics Measurement category Multimeter Ohmmeter Rheoscope Voltmeter Notes References External links — from Lessons in Electric Circuits series main page Electrical meters Electronic test equipment Flow meters
2869
https://en.wikipedia.org/wiki/Anxiolytic
Anxiolytic
An anxiolytic (; also antipanic or anti-anxiety agent) is a medication or other intervention that reduces anxiety. This effect is in contrast to anxiogenic agents which increase anxiety. Anxiolytic medications are used for the treatment of anxiety disorders and their related psychological and physical symptoms. Nature of anxiety Anxiety is a naturally-occurring emotion and response. When anxiety levels exceed the tolerability of a person, anxiety disorders may occur. People with anxiety disorders can exhibit fear responses, such as defensive behaviors, high levels of alertness, and negative emotions. Those with anxiety disorders may have concurrent psychological disorders, such as depression. Anxiety disorders are classified using six possible clinical assessments: Different types of anxiety disorders will share some general symptoms while having their own distinctive symptoms. This explains why people with different types of anxiety disorders will respond differently to different classes of anti-anxiety medications. Etiology The etiology of anxiety disorder remains unknown. There are several contributing factors that are still yet to be proved to cause anxiety disorders. These factors include childhood anxiety, drug induction by central stimulant drugs, metabolic diseases or having depressive disorder. Medications Anti-anxiety medication is any drug that can be taken or prescribed for the treatment of anxiety disorders, which may be mediated by neurotransmitters like norepinephrine, serotonin, dopamine, and gamma-aminobutyric acid (GABA) in the central nervous system. Anti-anxiety medication can be classified into six types according to their different mechanisms: antidepressants, benzodiazepines, azapirones, antiepileptics, antipsychotics, and beta blockers. Antidepressants include selective serotonin reuptake inhibitors (SSRIs), serotonin–norepinephrine reuptake inhibitors (SNRIs), tricyclic antidepressants (TCA), monoamine oxidase inhibitor (MAOI). SSRIs are used in all types of anxiety disorders while SNRIs are used for generalized anxiety disorder (GAD). Both of them are considered as first-line anti-anxiety medications. TCAs are second-line treatment as they cause more significant adverse effects when compared to the first-line treatment. Benzodiazepines are effective in emergent and short-term treatment of anxiety disorders due to their fast onset but carry the risk of dependence. Buspirone is indicated for GAD, which has much slower onset but with the advantage of less sedating and withdrawal effects. History The first monoamine oxidase inhibitor (MAOI), iproniazid, was discovered accidentally when developing the new antitubercular drug isoniazid. The drug was found to induce euphoria and improve the patient's appetite and sleep quality. The first tricyclic antidepressant, imipramine, was originally developed and studied to be an antihistamine alongside other first-generation antihistamines of the time, such as promethazine. TCAs can increase the level of norepinephrine and serotonin by inhibiting their reuptake transport proteins. The majority of TCAs exert greater effect on norepinephrine, which leads to side effects like drowsiness and memory loss. In order to be more effective on serotonin agonism and avoid anticholinergic and antihistaminergic side effects, selective serotonin reuptake inhibitors (SSRI) were researched and introduced to treat anxiety disorders. The first SSRI, fluoxetine (Prozac), was discovered in 1974 and approved by FDA in 1987. After that, other SSRIs like sertraline (Zoloft), paroxetine (Paxil), and escitalopram (Lexapro) have entered the market. The first serotonin norepinephrine reuptake inhibitor (SNRI), venlafaxine (Effexor), entered the market in 1993. SNRIs can target serotonin and norepinephrine transporters while avoiding imposing significant effects on other adrenergic (α1, α2, and β), histamine (H1), muscarinic, dopamine, or postsynaptic serotonin receptors. Classifications There are six groups of anti-anxiety medications available that have been proven to be clinically significant in treatment of anxiety disorders. The groups of medications are as follows. Antidepressants Medications that are indicated for both anxiety disorders and depression. Selective serotonin reuptake inhibitors (SSRIs) and serotonin–norepinephrine reuptake inhibitors (SNRIs) are new generations of antidepressants. They have a much lower adverse effect profile than older antidepressants like monoamine oxidase inhibitors (MAOIs) and tricyclic antidepressant (TCAs). Therefore, SSRIs and SNRIs are now the first-line agent in treating long term anxiety disorders, given their applications and significance in all six types of disorders. Benzodiazepines Benzodiazepines are used for acute anxiety and could be added along with current use of SSRIs to stabilize a treatment. Long-term use in treatment plans is not recommended. Different kinds of benzodiazepine will vary in its pharmacological profile, including its strength of effect and time taken for metabolism. The choice of the benzodiazepine will depend on the corresponding profiles. Benzodiazepines are used for emergent or short-term management. They are not recommended as the first-line anti-anxiety drugs, but they can be used in combination with SSRIs/SNRIs during the initial treatment stage. Indications include panic disorder, sleep disorders, seizures, acute behavioral disturbance, muscle spasm and premedication and sedation for procedures. Azapirones Buspirone can be useful in GAD but not particularly effective in treating phobias, panic disorder or social anxiety disorders. It is a safer option for long-term use as it does not cause dependence like benzodiazepines. Antiepileptics Antiepileptics are rarely prescribed as an off-label treatment for anxiety disorders and post-traumatic stress disorders. There have been some suggestions that they may help with GAD, panic disorder and phobic symptoms but there is currently not enough research or conclusive data suggesting they are more effective than a placebo. Antipsychotics Olanzapine and risperidone are atypical antipsychotics which are also effective in GAD and PTSD treatment. However, there is a higher chance of experiencing adverse effects than the other anti-anxiety medications. Beta-adrenoceptor antagonists Propranolol is originally used for high blood pressure and heart diseases. It can also be used to treat anxiety with symptoms like tremor or increased heart rate. They work on the nervous system and alleviate the symptoms as a relief. Propranolol is also commonly used for public speaking when one is nervous. Mechanism of action Selective serotonin reuptake inhibitors (SSRI) and serotonin and norepinephrine reuptake inhibitors (SNRI) Both SSRIs and SNRIs are reuptake inhibitors of a class of nerve signal transduction chemical called neurotransmitters. Serotonin and norepinephrine are neurotransmitters that are related to nervous control in mood regulation. The level of neurotransmitters are regulated by the nerve through reuptake to avoid accumulation of the neurotransmitter at the endings of nerve fiber. By reuptaking the produced neurotransmitter, the level will go back down and ready to go back up upon excitation from a new nerve signal. However the level of patients with anxiety disorders are usually low or their nerve fibers are insensitive to the neurotransmitters. SSRIs and SNRIs will then block the channel of reuptake and increase the level of the neurotransmitter. The nerve fibers will originally inhibit further production of neurotransmitters upon the increase. However the prolonged increase will eventually desensitize the nerve about the change in level. Therefore, the action of both SSRIs and SNRIs will take 4–6 weeks to exert their full effect. Benzodiazepine Benzodiazepines bind selectively to the GABA receptor, which is the receptor protein found in the nervous system and is in control of the nervous response. Benzodiazepine will increase the entry of chloride ions into the cells by improving the binding between GABA and GABA receptors and then the better opening of the channel for chloride ion passage. The high level of chloride ion inside the nerve cells makes the nerve more difficult to depolarize and inhibit further nerve signal transduction. The excitability of the nerves then reduces and the nervous system slows down. Therefore, the drug can alleviate symptoms of anxiety disorder and make the person less nervous. Clinical use Selective serotonin reuptake inhibitors Selective serotonin reuptake inhibitors (SSRIs) are a class of medications used in the treatment of depression, anxiety disorders, OCD and some personality disorders. SSRIs are the first-line anti-anxiety medications. Serotonin is one of the crucial neurotransmitters in mood enhancement, and increasing serotonin level produces an anti-anxiety effect. SSRIs increase the serotonin level in the brain by inhibiting serotonin uptake pumps on serotonergic systems, without interactions with other receptors and ion channels. SSRIs are beneficial in both acute response and long-term maintenance treatment for both depression and anxiety disorder. SSRIs can increase anxiety initially due to negative feedback through the serotonergic autoreceptors; for this reason a concurrent benzodiazepine can be used until the anxiolytic effect of the SSRI occurs. The SSRIs paroxetine and escitalopram are USFDA approved to treat generalized anxiety disorder. Therapeutic use Adverse effect The common early side effects of SSRIs include nausea and loose stool, which can be solved by discontinuing the treatment. Headache, dizziness, insomnia are the common early side effects as well. Sexual dysfunction, anorgasmia, erectile dysfunction, and reduced libido are common adverse side effects of SSRIs. Sometimes they may persist after the cessation of treatment. Withdrawal symptoms like dizziness, headache and flu-like symptoms (fatigue/myalgia/loose stool) may occur if SSRI is stopped suddenly. The brain is incapable of upregulating the receptors to sufficient levels especially after discontinuation of the drugs with short half life like paroxetine. Both fluoxetine and its active metabolite have a long half life therefore it causes the least withdrawal symptoms. Serotonin–norepinephrine reuptake inhibitors Serotonin–norepinephrine reuptake inhibitor (SNRIs) include venlafaxine and duloxetine drugs. Venlafaxine, in extended release form, and duloxetine, are indicated for the treatment of GAD. SNRIs are as effective as SSRIs in the treatment of anxiety disorders. Tricyclic antidepressants Tricyclic antidepressants (TCAs) have anxiolytic effects; however, side effects are often more troubling or severe and overdose is dangerous. They are considered effective, but have generally been replaced by antidepressants that cause different adverse effects. Examples include imipramine, doxepin, amitriptyline, nortriptyline and desipramine. Therapeutic use Contraindication TCAs may cause drug poisoning in patients with hypotension, cardiovascular diseases and arrhythmias. Tetracyclic antidepressants Mirtazapine has demonstrated anxiolytic effect comparable to SSRIs while rarely causing or exacerbating anxiety. Mirtazapine's anxiety reduction tends to occur significantly faster than SSRIs. Monoamine oxidase inhibitors Monoamine oxidase inhibitors (MAOIs) are first-generation antidepressants effective for anxiety treatment but their dietary restrictions, adverse effect profile and availability of newer medications have limited their use. MAOIs include phenelzine, isocarboxazid and tranylcypromine. Pirlindole is a reversible MAOI that lacks dietary restriction. Barbiturates Barbiturates are powerful anxiolytics but the risk of abuse and addiction is high. Many experts consider these drugs obsolete for treating anxiety but valuable for the short-term treatment of severe insomnia, though only after benzodiazepines or non-benzodiazepines have failed. Benzodiazepines Benzodiazepines are prescribed to quell panic attacks. Benzodiazepines are also prescribed in tandem with an antidepressant for the latent period of efficacy associated with many ADs for anxiety disorder. There is risk of benzodiazepine withdrawal and rebound syndrome if BZDs are rapidly discontinued. Tolerance and dependence may occur. The risk of abuse in this class of medication is smaller than in that of barbiturates. Cognitive and behavioral adverse effects are possible. Benzodiazepines include: alprazolam (Xanax), bromazepam, chlordiazepoxide (Librium), clonazepam (Klonopin), diazepam (Valium), lorazepam (Ativan), oxazepam, temazepam, and Triazolam. Therapeutic use Adverse effect Benzodiazepines lead to central nervous system depression, resulting in common adverse effects like drowsiness, oversedation, light-headedness. Memory impairment can be a common adverse effect especially in elderly, hypersalivation, ataxia, slurred speech, psychomotor effects. Sympatholytics Sympatholytics are a group of anti-hypertensives which inhibit activity of the sympathetic nervous system. Beta blockers reduce anxiety by decreasing heart rate and preventing shaking. Beta blockers include propranolol, oxprenolol, and metoprolol. The alpha-1 agonist prazosin could be effective for PTSD. The alpha-2 agonists clonidine and guanfacine have demonstrated both anxiolytic and anxiogenic effects. Miscellaneous Buspirone Buspirone (Buspar) is a 5-HT1A receptor agonist used to treated generalized anxiety disorder. If an individual has taken a benzodiazepine, buspirone will be less effective. Pregabalin Pregabalin (Lyrica) produces anxiolytic effect after one week of use comparable to lorazepam, alprazolam, and venlafaxine with more consistent psychic and somatic anxiety reduction. Unlike BZDs, it does not disrupt sleep architecture nor does it cause cognitive or psychomotor impairment. Hydroxyzine Hydroxyzine (Atarax) is an antihistamine originally approved for clinical use by the FDA in 1956. Hydroxyzine has a calming effect which helps ameliorate anxiety. Hydroxyzine efficacy is comparable to benzodiazepines in the treatment of generalized anxiety disorder. Phenibut Phenibut (Anvifen, Fenibut, Noofen) is an anxiolytic used in Russia. Phenibut is a GABAB receptor agonist, as well as an antagonist at α2δ subunit-containing voltage-dependent calcium channels (VDCCs), similarly to gabapentinoids like gabapentin and pregabalin. The medication is not approved by the FDA for use in the United States, but is sold online as a supplement. Temgicoluril Temgicoluril (Mebicar) is an anxiolytic produced in Latvia and used in Eastern Europe. Temgicoluril has an effect on the structure of limbic-reticular activity, particularly on the hypothalamus, as well as on all four basic neuromediator systems – γ aminobutyric acid (GABA), choline, serotonin and adrenergic activity. Temgicoluril decreases noradrenaline, increases serotonin, and exerts no effect on dopamine. Fabomotizole Fabomotizole (Afobazole) is an anxiolytic drug launched in Russia in the early 2000s. Its mechanism of action is poorly-defined, with GABAergic, NGF and BDNF release promoting, MT1 receptor agonism, MT3 receptor antagonism, and sigma receptor agonism thought to have some involvement. Bromantane Bromantane is a stimulant drug with anxiolytic properties developed in Russia during the late 1980s. Bromantane acts mainly by facilitating the biosynthesis of dopamine, through indirect genomic upregulation of relevant enzymes (tyrosine hydroxylase (TH) and aromatic L-amino acid decarboxylase (AAAD)). Emoxypine Emoxypine is an antioxidant that is also a purported anxiolytic. Its chemical structure resembles that of pyridoxine, a form of vitamin B6. Menthyl isovalerate Menthyl isovalerate is a flavoring food additive marketed as a sedative and anxiolytic drug in Russia under the name Validol. Racetams Some racetam based drugs such as aniracetam can have an antianxiety effect. Alpidem Alpidem is a nonbenzodiazepine anxiolytic with similar anxiolytic effectiveness as benzodiazepines but reduced sedation and cognitive, memory, and motor impairment. It was marketed briefly in France but was withdrawn from the market due to liver toxicity. Etifoxine Etifoxine has similar anxiolytic effects as benzodiazepine drugs, but does not produce the same levels of sedation and ataxia. Further, etifoxine does not affect memory and vigilance, and does not induce rebound anxiety, drug dependence, or withdrawal symptoms. Alcohol Alcohol is sometimes used as an anxiolytic by self-medication. fMRI can measure the anxiolytic effects of alcohol in the human brain. Alternatives to medication Cognitive behavioral therapy (CBT) is an effective treatment for panic disorder, social anxiety disorder, generalized anxiety disorder, and obsessive–compulsive disorder, while exposure therapy is the recommended treatment for anxiety related phobias. Healthcare providers can guide those with anxiety disorder by referring them to self-help resources. Sometimes medication is combined with psychotherapy but research has not found a benefit of combined pharmacotherapy and psychotherapy versus monotherapy. If CBT is found ineffective, both the Canadian and American medical associations then suggest the use of medication. See also Categories References External links Anxiety disorder treatment Drug classes defined by psychological effects
2881
https://en.wikipedia.org/wiki/Alexander%20of%20Hales
Alexander of Hales
Alexander of Hales (also Halensis, Alensis, Halesius, Alesius ; 21 August 1245), also called Doctor Irrefragibilis (by Pope Alexander IV in the Bull De Fontibus Paradisi) and Theologorum Monarcha, was a Franciscan friar, theologian and philosopher important in the development of scholasticism. Life Alexander was born at Hales, Shropshire (today Halesowen, West Midlands), England, between 1180 and 1186. He came from a rather wealthy country family. He studied at the University of Paris and became a master of arts sometime before 1210. He began to read theology in 1212 or 1213, and became a regent master in 1220 or 1221. He introduced the Sentences of Peter Lombard as the basic textbook for the study of theology. During the University strike of 1229, Alexander participated in an embassy to Rome to discuss the place of Aristotle in the curriculum. Having held a prebend at Holborn (prior to 1229) and a canonry of St. Paul's in London (1226-1229), he visited England in 1230 and received a canonry and an archdeaconry in Coventry and Lichfield, his native diocese. He taught at Paris in the academic year 1232–33, but was appointed to a delegation by Henry III of England in 1235, along with Simon Langton and Fulk Basset, to negotiate the renewal of the peace between England and France. In 1236 or 1237, aged about 50, Alexander entered the Franciscan Order, having previously considered both the Cistercians and the Dominicans. He thus became the first Franciscan friar to hold a University chair. His doctrinal positions became the starting point of the Franciscan school of theology. He continued to teach and to represent the University, and participated in the First Council of Lyon in the winter of 1245. After returning to Paris, Alexander fell ill, probably due to an epidemic then sweeping the city. Shortly before his death, he passed his chair on to John of La Rochelle, setting the precedent for that chair to be held by a Franciscan. Alexander died at Paris on 21 August 1245. As the first Franciscan to hold a chair at the University of Paris, Alexander had many significant disciples. He was called Doctor Irrefragibilis (Irrefutable Teacher) and Doctor Doctorum (Teacher of Teachers). The latter title is especially suggestive of his role in forming several Franciscans who later became influential thinkers in the faculty, among them Bonaventure, John of La Rochelle, Odo Rigaldus, William of Middleton and Richard Rufus of Cornwall. Bonaventure, who may not have sat under Alexander directly, nevertheless referred to Alexander as his "father and master" and wished to "follow in his footsteps." Works Alexander is known for reflecting the works of several other Middle Age thinkers, especially those of Anselm of Canterbury and Augustine of Hippo. He was also known to quote thinkers such as Bernard of Clairvaux and Richard of Saint-Victor. He differs from those in his genre as he is known to reflect his own interests and those of his generation. When using the works of his authorities, Alexander does not only review their reasoning but also gives conclusions, expands on them, and offers his agreements and disagreement with them. He was also different in that he appeals to pre-Lombardian figures, and in his use of Anselm of Canterbury and Bernard of Clairvaux, whose works were not cited as frequently by other 12th-century scholastics. Aristotle is also quite frequently quoted in Alexander's works. Alexander was fascinated by the Pseudo-Dionysian hierarchy of angels and in how their nature can be understood, given Aristotelian metaphysics. Among the doctrines which were specially developed and, so to speak, fixed by Alexander of Hales, are the thesaurus supererogationis perfectorum (treasury of supererogatory merits) and the character indelibilis (sacramental character) of baptism, confirmation, and ordination. That doctrine had been written about much earlier by Augustine and was eventually defined a dogma by the Council of Trent. He also posed an important question about the cause of the Incarnation: would Christ have been incarnated if humanity had never sinned? The question eventually became the focal point for a philosophical issue (the theory of possible worlds) and a theological topic on the distinction between God's absolute power (potentia absoluta) and His ordained power (potentia ordinata). Summa Universae Theologiae He had written the summary/commentary of Peter Lombard's four books of the Sentences. It had exposed the trinitarian theology of the Greeks. This had been the most important writing that Alexander had claimed, and it had been the earliest in the genre. While it is common for scholars to state that Alexander was the first to write a commentary on the Sentences of Peter Lombard, it is not quite accurate. Authorship is more contentious for this work; although he started this work, he died before it could be finished, and it most likely was more a product of people other than Alexander. There were a number of "commentaries" on the Sentences, but Alexander's appears to have been the first magisterial commentary. Although it was Alexander's most significant writing, it had not been completed, therefore leaving historians with many questions on the reliability and quality of the writing. This was taken into consideration when the Summa had been examined by Victorin Doucet for different editions of them. The sources has seem to be the resulting problem of the Summa, "counted there were 4814 explicit quotations and 1372 implicit quotations from Augustine, more than one quarter of texts were cited in the body of the Summa. Of Alexander's Summa, which was on one occasion proclaimed by an assembly of seventy doctors to be infallible, Roger Bacon declared that, though it was as heavy as the weight of a horse, it was full of errors and displayed ignorance of physics, of metaphysics, and even of logic. Other historical works Alexander also influenced and sometimes is confused with Alexander Carpenter, Latinized as Fabricius (fl. 1429), who was the author of the Destructorium viciorum, a religious work popular in the 15th and 16th centuries. Carpenter also authored other works, such as "Homiliae eruditae" ("Learned Sermons"). Historiographical contribution Alexander was said to have been among the earliest scholastics to engage with Aristotle's newly translated writings. Between 1220 and 1227, he wrote Glossa in quatuor libros Sententiarum Petri Lombardi (A Gloss on the Four Books of the Sentences of Peter Lombard) (composed in the mid-12th century), which was particularly important because it was the first time that a book other than the Bible was used as a basic text for theological study. This steered the development of scholasticism in a more systematic direction, inaugurating an important tradition of writing commentaries on the Sentences as a fundamental step in the training of master theologians. A medieval scholastic In doing so, he elevated Lombard's work from a mere theological resource to the basic framework of questions and problems from which masters could teach. The commentary (or more correctly titled a Gloss) survived in student reports from Alexander's teaching in the classroom and so it provides a major insight into the way theologians taught their discipline in the 1220s. As is the case with Glossa and Quaestiones Disputatae, much of his work is probably written in the form of notes on his oral teachings by students, though the content is definitely his. For his contemporaries, however, Alexander's fame was his inexhaustible interest in disputation. His disputations prior to his becoming a Franciscan cover over 1,600 pages in their modern edition. His disputed questions after 1236 remain unpublished. Alexander was also one of the first scholastics to participate in the Quodlibetal, a university event in which a master had to respond to any question posed by any student or master over a period of three days. Alexander's Quodlibetal questions also remain unedited. Theologian At the beginning of 1236, he entered the Franciscan order (he was at least 50) and was the first Franciscan to hold a chair at the University of Paris. He held this post until shortly before his death in Paris in 1245. When he became a Franciscan and thus created a formal Franciscan school of theology at Paris, it was soon clear that his students lacked some of the basic tools for the discipline. Alexander responded by beginning a Summa theologiae that is now known as the Summa fratris Alexandri. Alexander drew mainly from his own disputations, but also selected ideas, arguments and sources from his contemporaries. It treats in its first part the doctrines of God and his attributes; in its second, those of creation and sin; in its third, those of redemption and atonement; and, in its fourth and last, those of the sacraments. This massive text, which Roger Bacon would later sarcastically describe as weighing as much as a horse, was unfinished at his death; his students, William of Middleton and John of Rupella, were charged with its completion. It was certainly read by the Franciscans at Paris, including Bonaventure. Alexander was an innovative theologian. He was part of the generation that first grappled with the writings of Aristotle. While there was a ban on using Aristotle's works as teaching texts, theologians like Alexander continued to exploit his ideas in their theology. Two other uncommon sources were promoted by Alexander: Anselm of Canterbury, whose writings had been ignored for almost a century, gained an important advocate in Alexander and he used Anselm's works extensively in his teaching on Christology and soteriology; and, Pseudo-Dionysius the Areopagite, whom Alexander used in his examination of the theology of Orders and ecclesiastical structures. Though he also continued the tradition of Aristotle- and Augustine-focused thought in the Franciscan school, he did so through an Anselm-directed lens. In fact, Alexander was one of the major influences for the advancement of Anselmian thought in the 13th century. One such example is the idea of original sin as a lack of justice. Alexander believed that original sin is both a punishment as well as a cause for punishment. That is to say, the body is corrupt, but the soul is clean. Alexander advances the idea that it would not be God's fault to create a being that would bind the ‘corrupt’ with the ‘clean’. He advanced a highly original response that the soul naturally desires the body. Consequently, God is both merciful in giving the soul what it wants, as well as just in punishing the soul for binding with the corrupt flesh. Either the soul knew that the body was corrupt, or it did not (in which case it would be “laboring under ignorance”); both of these considerations are cause for divine punishment. Alexander is also known for rejecting the idea that there are many things in God's mind, instead claiming that it is more perfect to know just one thing. He did not start off with this view, though. In the Glossa, he openly suggests the idea of the multiplicity of divine ideas. In his later work, Quaestio disputata antequam erat Frater 46, he finally rejects the plurality of divine ideas, and this theme continues through the rest of his works. Specifically, in one of his last works, De scientia divina, he concludes that the idea of plurality itself is strictly temporal, a human notion. One of his more famous works, the Summa, is important because of its system for determining if a war is just. There are six requirements for determining this: authority and attitude (in reference to who declares the war), intention and condition (in reference to the soldiers), merit (of the enemy) and just cause. Just cause becomes the overarching moral principle for declaring war in three ways: the relief of good people, coercion of the wicked, and peace for all. It is important to note that Alexander put ‘peace for all’ at the end of the list to amplify its importance. Writings Alexander of Hales. Glossa in quatuor libros sententiarum Petri Lombardi. Edited by the Quaracchi Fathers. Bibliotheca Franciscana scholastica medii aevi, t. 12–15. Rome: Collegii S. Bonaventurae, 1951–1957. Alexander of Hales. Quaestiones disputatae antequam esset frater. Edited by the Quaracchi Fathers. Bibliotheca Franciscana scholastica medii aevi, t. 19–21. Quaracchi: Collegii S. Bonaventurae,1960. Alexander of Hales (attributed). Summa universis theologiae, (Summa fratris Alexandri), edited by Bernardini Klumper and the Quaracchi Fathers, 4 vols. Rome: Collegii S. Bonaventurae, 1924–1948. Notes Further reading Boehner, Philotheus. The History of the Franciscan School, I. Alexander of Hales; II. John of Rupella – Saint Bonaventure; III. Duns Scotus; Pt. IV. William Ockham, St. Bonaventure, N.Y. : St. Bonaventure University, 1943–1946. Brady, Ignatius. C. “Sacred Scripture in the early franciscan school', in La Sacra Scrittura e i francescani. Studium Biblicum Franciscanum. Rome, 1973, 65-82. Coolman, Boyd Taylor. “Alexander of Hales,” in The Spiritual Senses: Perceiving God in Western Christianity, edited by Paul L. Gavrilyuk and Sarah Coakley. New York: Cambridge University Press, 2011, 121–139. Cullen, Christopher M. “Alexander of Hales,” in Companion to Philosophy in the Middle Ages, edited by Jorge J.E. Gracia and Timothy B. Noone. Oxford: Blackwell, 2006, 104–109. Fornaro, Italo. La teologia dell'immagine nella Glossa di Alessandro di Hales Vicenza, 1985. Osborne, Kenan B. “Alexander of Hales,” in The History of Franciscan Theology edited by idem. St. Bonaventure, NY: Franciscan Institute Publications, 1994. Peter Lombard. Sententiarum libri quattuor. Edited by the Quaracchi Fathers. Spicilegium Bonaventurianum 4, 5. Grottaferrata: Collegium S. Bonaventurae, 1971–1981. English translation by Giulio Silano, The Sentences. 4 vols. Toronto: PIMS, 2007–2010. Young, Abigail A. “Accessus ad Alexandrum: the Prefatio to the Postilla in Iohannis Euangelium of Alexander of Hales (1186?-1245).” Mediaeval Studies 52 (1990), 1-23. External links 1180s births 1245 deaths People from Halesowen 13th-century English Roman Catholic priests Catholic philosophers English Roman Catholic theologians English Friars Minor Scholastic philosophers 13th-century philosophers 13th-century English Roman Catholic theologians Writers from Shropshire Scholasticism Clergy from Shropshire Systematic theologians University of Paris alumni
2889
https://en.wikipedia.org/wiki/Amorphous%20solid
Amorphous solid
In condensed matter physics and materials science, an amorphous solid (or non-crystalline solid) is a solid that lacks the long-range order that is characteristic of a crystal. The terms "glass" and "glassy solid" are sometimes used synonymously with amorphous solid; however, these terms refer specifically to amorphous materials that undergo a glass transition. Examples of amorphous solids include glasses, metallic glasses, and certain types of plastics and polymers. Etymology The term comes from the Greek a ("without"), and morphé ("shape, form"). Structure Amorphous materials have an internal structure consisting of interconnected structural blocks that can be similar to the basic structural units found in the corresponding crystalline phase of the same compound. Unlike in crystalline materials, however, no long-range order exists. Amorphous materials therefore cannot be defined by a finite unit cell. Statistical methods, such as the atomic density function and radial distribution function, are more useful in describing the structure of amorphous solids. Although amorphous materials lack long range order, they exhibit localized order on small length scales. Localized order in amorphous materials can be categorized as short or medium range order. By convention, short range order extends only to the nearest neighbor shell, typically only 1-2 atomic spacings. Medium range order is then defined as the structural organization extending beyond the short range order, usually by 1-2 nm. Fundamental properties of amorphous solids Glass transition at high temperatures The freezing from liquid state to amorphous solid - glass transition - is considered one of the very important and unsolved problems of physics. Universal low-temperature properties of amorphous solids At very low temperatures (below 1-10 K), large family of amorphous solids have various similar low-temperature properties. Although there are various theoretical models, neither glass transition nor low-temperature properties of glassy solids are well understood on the fundamental physics level. Amorphous solids is an important area of condensed matter physics aiming to understand these substances at high temperatures of glass transition and at low temperatures towards absolute zero. From 1970s, low-temperature properties of amorphous solids were studied experimentally in great detail. For all of these substances, specific heat has a (nearly) linear dependence as a function of temperature, and thermal conductivity has nearly quadratic temperature dependence. These properties are conventionally called anomalous being very different from properties of crystalline solids. On the phenomenological level, many of these properties were described by a collection of tunneling two-level systems. Nevertheless, the microscopic theory of these properties is still missing after more than 50 years of the research. Remarkably, a dimensionless quantity of internal friction is nearly universal in these materials. This quantity is a dimensionless ratio (up to a numerical constant) of the phonon wavelength to the phonon mean free path. Since the theory of tunneling two-level states (TLSs) does not address the origin of the density of TLSs, this theory cannot explain the universality of internal friction, which in turn is proportional to the density of scattering TLSs. The theoretical significance of this important and unsolved problem was highlighted by Anthony Leggett. Nano-structured materials Amorphous materials will have some degree of short-range order at the atomic-length scale due to the nature of intermolecular chemical bonding. Furthermore, in very small crystals, short-range order encompasses a large fraction of the atoms; nevertheless, relaxation at the surface, along with interfacial effects, distorts the atomic positions and decreases structural order. Even the most advanced structural characterization techniques, such as X-ray diffraction and transmission electron microscopy, have difficulty distinguishing amorphous and crystalline structures at short-length scales. Characterization of amorphous solids Due to the lack of long-range order, standard crystallographic techniques are often inadequate in determining the structure of amorphous solids. A variety of electron, X-ray, and computation-based techniques have been used to characterize amorphous materials. Multi-modal analysis is very common for amorphous materials. X-ray and neutron diffraction Unlike crystalline materials which exhibit strong Bragg diffraction, the diffraction patterns of amorphous materials are characterized by broad and diffuse peaks. As a result, detailed analysis and complementary techniques are required to extract real space structural information from the diffraction patterns of amorphous materials. It is useful to obtain diffraction data from both X-ray and neutron sources as they have different scattering properties and provide complementary data. Pair distribution function analysis can be performed on diffraction data to determine the probability of finding a pair of atoms separated by a certain distance. Another type of analysis that is done with diffraction data of amorphous materials is radial distribution function analysis, which measures the number of atoms found at varying radial distances away from an arbitrary reference atom. From these techniques, the local order of an amorphous material can be elucidated. X-ray absorption fine-structure spectroscopy X-ray absorption fine-structure spectroscopy is an atomic scale probe making it useful for studying materials lacking in long range order. Spectra obtained using this method provide information on the oxidation state, coordination number, and species surrounding the atom in question as well as the distances at which they are found. Atomic electron tomography The atomic electron tomography technique is performed in transmission electron microscopes capable of reaching sub-Angstrom resolution. A collection of 2D images taken at numerous different tilt angles is acquired from the sample in question, and then used to reconstruct a 3D image. After image acquisition, a significant amount of processing must be done to correct for issues such as drift, noise, and scan distortion. High quality analysis and processing using atomic electron tomography results in a 3D reconstruction of an amorphous material detailing the atomic positions of the different species that are present. Fluctuation electron microscopy Fluctuation electron microscopy is another transmission electron microscopy based technique that is sensitive to the medium range order of amorphous materials. Structural fluctuations arising from different forms of medium range order can be detected with this method. Fluctuation electron microscopy experiments can be done in conventional or scanning transmission electron microscope mode. Computational techniques Simulation and modeling techniques are often combined with experimental methods to characterize structures of amorphous materials. Commonly used computational techniques include density functional theory, molecular dynamics, and reverse Monte Carlo. Uses and observations Amorphous thin films Amorphous phases are important constituents of thin films. Thin films are solid layers of a few nanometres to tens of micrometres thickness that are deposited onto a substrate. So-called structure zone models were developed to describe the microstructure of thin films as a function of the homologous temperature (Th), which is the ratio of deposition temperature to melting temperature. According to these models, a necessary condition for the occurrence of amorphous phases is that (Th) has to be smaller than 0.3. The deposition temperature must be below 30% of the melting temperature. Superconductivity Regarding their applications, amorphous metallic layers played an important role in the discovery of superconductivity in amorphous metals made by Buckel and Hilsch. The superconductivity of amorphous metals, including amorphous metallic thin films, is now understood to be due to phonon-mediated Cooper pairing. The role of structural disorder can be rationalized based on the strong-coupling Eliashberg theory of superconductivity. Thermal protection Amorphous solids typically exhibit higher localization of heat carriers compared to crystalline, giving rise to low thermal conductivity. Products for thermal protection, such as thermal barrier coatings and insulation, rely on materials with ultralow thermal conductivity. Technological uses Today, optical coatings made from TiO2, SiO2, Ta2O5 etc. (and combinations of these) in most cases consist of amorphous phases of these compounds. Much research is carried out into thin amorphous films as a gas separating membrane layer. The technologically most important thin amorphous film is probably represented by a few nm thin SiO2 layers serving as isolator above the conducting channel of a metal-oxide semiconductor field-effect transistor (MOSFET). Also, hydrogenated amorphous silicon (Si:H) is of technical significance for thin-film solar cells. Pharmaceutical use In the pharmaceutical industry, some amorphous drugs have been shown to offer higher bioavailability than their crystalline counterparts as a result of the higher solubility of the amorphous phase. However, certain compounds can undergo precipitation in their amorphous form in vivo, and can then decrease mutual bioavailability if administered together. In soils Amorphous materials in soil strongly influence bulk density, aggregate stability, plasticity, and water holding capacity of soils. The low bulk density and high void ratios are mostly due to glass shards and other porous minerals not becoming compacted. Andisol soils contain the highest amounts of amorphous materials. Phase The occurrence of amorphous phases turned out to be a phenomenon of particular interest for the studying of thin-film growth. The growth of polycrystalline films is often used and preceded by an initial amorphous layer, the thickness of which may amount to only a few nm. The most investigated example is represented by the unoriented molecules of thin polycrystalline silicon films. Wedge-shaped polycrystals were identified by transmission electron microscopy to grow out of the amorphous phase only after the latter has exceeded a certain thickness, the precise value of which depends on deposition temperature, background pressure, and various other process parameters. The phenomenon has been interpreted in the framework of Ostwald's rule of stages that predicts the formation of phases to proceed with increasing condensation time towards increasing stability. Notes References Further reading Phases of matter Unsolved problems in physics
2890
https://en.wikipedia.org/wiki/A%20Wizard%20of%20Earthsea
A Wizard of Earthsea
A Wizard of Earthsea is a fantasy novel written by American author Ursula K. Le Guin and first published by the small press Parnassus in 1968. It is regarded as a classic of children's literature and of fantasy, within which it is widely influential. The story is set in the fictional archipelago of Earthsea and centers on a young mage named Ged, born in a village on the island of Gont. He displays great power while still a boy and joins a school of wizardry, where his prickly nature drives him into conflict with a fellow student. During a magical duel, Ged's spell goes awry and releases a shadow creature that attacks him. The novel follows Ged's journey as he seeks to be free of the creature. The book has often been described as a bildungsroman, or coming-of-age story, as it explores Ged's process of learning to cope with power and come to terms with death. The novel also carries Taoist themes about a fundamental balance in the universe of Earthsea, which wizards are supposed to maintain, closely tied to the idea that language and names have power to affect the material world and alter this balance. The structure of the story is similar to that of a traditional epic, although critics have also described it as subverting this genre in many ways, such as by making the protagonist dark-skinned in contrast to more typical white-skinned heroes. A Wizard of Earthsea received highly positive reviews, initially as a work for children and later among a general audience. It won the Boston Globe–Horn Book Award in 1969 and was one of the final recipients of the Lewis Carroll Shelf Award in 1979. Margaret Atwood called it one of the "wellsprings" of fantasy literature. Le Guin wrote five subsequent books that are collectively referred to as the Earthsea Cycle, together with A Wizard of Earthsea: The Tombs of Atuan (1971), The Farthest Shore (1972), Tehanu (1990), The Other Wind (2001), and Tales from Earthsea (2001). George Slusser described the series as a "work of high style and imagination", while Amanda Craig said that A Wizard of Earthsea was "the most thrilling, wise, and beautiful children's novel ever". Background Early concepts for the Earthsea setting were developed in two short stories, "The Rule of Names" (1964) and "The Word of Unbinding" (1964), both published in Fantastic. The stories were later collected in Le Guin's anthology The Wind's Twelve Quarters (1975). Earthsea was also used as the setting for a story she wrote in 1965 or 1966, which was never published. In 1967, Herman Schein (the publisher of Parnassus Press and the husband of Ruth Robbins, the illustrator of the book) asked Le Guin to try writing a book "for older kids", giving her complete freedom over the subject and the approach. She had no previous experience specifically with the genre of young adult literature, which rose in prominence during the late 1960s. Drawing from her short stories, she began work on A Wizard of Earthsea. She has said that the book was in part a response to the image of wizards as ancient and wise, and to wondering where they come from. She later said that she chose the medium of fantasy, and the theme of coming of age, with her intended adolescent audience in mind. The short stories published in 1964 introduced the world of Earthsea and important concepts in it, such as Le Guin's treatment of magic. "The Rule of Names" also introduced Yevaud, a dragon who features briefly in A Wizard of Earthsea. Her depiction of Earthsea was influenced by her familiarity with Native American legends as well as Norse mythology. Her knowledge of myths and legends, as well as her familial interest in anthropology, have been described by scholar Donna White as allowing her to create "entire cultures" for the islands of Earthsea. The influence of Norse lore in particular can be seen in the characters of the Kargs, who are blonde and blue-eyed, and worship two gods who are brothers. The influence of Taoist thought on Le Guin's writing is also visible in the idea of a cosmic "balance". Book Setting Earthsea itself is an archipelago, or group of islands. In the fictional history of this world, the islands were raised from the ocean by a being called Segoy. The world is inhabited by both humans and dragons, and most or all humans have some innate magical gift, some are more gifted sorcerers or wizards. The world is shown as being based on a delicate balance, which most of its inhabitants are aware of, but which is disrupted by somebody in each of the original trilogy of novels. Earthsea is pre-industrial and has diverse cultures within the widespread archipelago. Most of the characters are of the Hardic peoples, who are dark-skinned, and who populate most of the islands. Four large eastern islands are inhabited by the white-skinned Kargish people, who despise magic and see the Hardic folk as evil sorcerers: the Kargs, in turn, are viewed by the Hardic people as barbarians. The far western regions of the archipelago are the realm of the dragons. Plot summary The novel follows a young boy called Duny, nicknamed "Sparrowhawk", born on the island of Gont. Discovering that the boy has great innate power, his aunt, a witch, teaches him the little magic she knows. When his village is attacked by Kargish raiders, Duny summons a fog to conceal the village and its inhabitants, enabling the residents to drive off the Kargs. Hearing of this, the powerful mage Ogion takes him as an apprentice, and later gives him his "true name"—Ged. Ogion tries to teach Ged about the "equilibrium", the concept that magic can upset the natural order of the world if used improperly. In an attempt to impress a girl, however, Ged searches Ogion's spell books and inadvertently summons a strange shadow, which has to be banished by Ogion. Sensing Ged's eagerness to act and impatience with his slow teaching methods, Ogion asks if he would rather go to the renowned school for wizards on the island of Roke. Ged loves Ogion, but decides to go to the school. At the school, Ged meets Jasper, and is immediately on bad terms with him. He is befriended by an older student named Vetch, but generally remains aloof from anyone else. Ged's skills inspire admiration from teachers and students alike. He finds a small creature—an otak, named Hoag, and keeps it as a pet. During a festival Jasper acts condescendingly towards Ged, provoking the latter's proud nature. Ged challenges him to a duel of magic, and casts a powerful spell intended to raise the spirit of a legendary dead woman. The spell goes awry and instead releases a shadow creature, which attacks him and scars his face. The Archmage Nemmerle drives the shadow away, but at the cost of his life. Ged spends many months healing before resuming his studies. The new Archmage, Gensher, describes the shadow as an ancient evil that wishes to possess Ged, and warns him that the creature has no name. Ged eventually graduates and receives his wizard's staff. He then takes up residence in the Ninety Isles, providing the poor villagers protection from the dragons that have seized and taken up residence on the nearby island of Pendor, but discovers that he is still being sought by the shadow. Knowing that he cannot guard against both threats at the same time, he sails to Pendor and gambles his life on a guess of the adult dragon's true name. When he is proved right, the dragon offers to tell him the name of the shadow, but Ged instead extracts a promise that the dragon and his offspring will never threaten the archipelago. Chased by the shadow, Ged flees to Osskil, having heard of the stone of the Terrenon. He is attacked by the shadow, and barely escapes into the Court of Terrenon. Serret, the lady of the castle, and the same girl that Ged had tried to impress, shows him the stone, and urges Ged to speak to it, claiming it can give him limitless knowledge and power. Recognizing that the stone harbors one of the Old Powers—ancient, powerful, malevolent beings—Ged refuses. He flees and is pursued by the stone's minions, but transforms into a swift falcon and escapes. He loses his otak. Ged flies back to Ogion on Gont. Unlike Gensher, Ogion insists that all creatures have a name and advises Ged to confront the shadow. Ogion is proved right; when Ged seeks out the shadow, it flees from him. Ged pursues it in a small sailboat, until it lures him into a fog where the boat is wrecked on a reef. Ged recovers with the help of an elderly couple marooned on a small island since they were children; the woman gives Ged part of a broken bracelet as a gift. Ged patches his boat and resumes his pursuit of the creature into the East Reach. On the island of Iffish, he meets his friend Vetch, who insists on joining him. They journey east far beyond the last known lands before they finally come upon the shadow. Naming it with his own name, Ged merges with it and joyfully tells Vetch he is healed and whole. Illustrations The first edition of the book, published in 1968, was illustrated by Ruth Robbins. The cover illustration was in color, and the interior of the book contained a map of the archipelago of Earthsea. In addition, each chapter had a black-and-white illustration by Robbins, similar to a woodcut image. The images represented topics from each chapter; for instance, the very first image depicted the island of Gont, while the illustration for the chapter "The Dragon of Pendor" pictured a flying dragon. The image shown here depicts Ged sailing in his boat Lookfar, and was used in the 10th chapter, "The Open Sea", in which Ged and Vetch travel from Iffish eastward past all known lands to confront the shadow creature. Publication A Wizard of Earthsea was first published in 1968 by Parnassus Press in Berkeley, a year before The Left Hand of Darkness, Le Guin's watershed work. It was a personal landmark for Le Guin, as it represented her first attempt at writing for children; she had written only a handful of other novels and short stories prior to its publication. The book was also her first attempt at writing fantasy, rather than science-fiction. A Wizard of Earthsea was the first of Le Guin's books to receive widespread critical attention, and has been described as her best known work, as part of the Earthsea series. The book has been released in numerous editions, including an illustrated Folio Society edition released in 2015. It was also translated into a number of other languages. An omnibus edition of all of Le Guin's Earthsea works was released on the 50th anniversary of the publication of A Wizard of Earthsea in 2018. Le Guin originally intended for A Wizard of Earthsea to be a standalone novel, but decided to write a sequel after considering the loose ends in the first book, and The Tombs of Atuan was released in 1971. The Farthest Shore was written as a third volume after further consideration, and was published in 1972. The Tombs of Atuan tells of the story of Ged's attempt to make whole the ring of Erreth Akbe, half of which is buried in the tombs of Atuan in the Kargish lands, from where he must steal it. There, he meets the child priestess Tenar, on whom the book focuses. In The Farthest Shore, Ged, who has become Archmage, tries to combat a dwindling of magic across Earthsea, accompanied by Arren, a young prince. The first three books are together seen as the "original trilogy"; in each of these, Ged is shown as trying to heal some imbalance in the world. They were followed by Tehanu (1990), Tales from Earthsea (2001), and The Other Wind (2001), which are sometimes referred to as the "second trilogy". Reception As children's literature Initial recognition for the book was from children's-book critics, among whom it garnered acclaim. A Wizard of Earthsea received an even more positive response in the United Kingdom when it was released there in 1971, which, according to White, reflected the greater admiration of British critics for children's fantasy. In her 1975 annotated collection Fantasy for Children, British critic Naomi Lewis described it in the following terms: "[It is not] the easiest book for casual browsing, but readers who take the step will find themselves in one of the most important works of fantasy of our time." Similarly, literary scholar Margaret Esmonde wrote in 1981 that "Le Guin has ... enriched children's literature with what may be its finest high fantasy", while a review in The Guardian by author and journalist Amanda Craig said it was "The most thrilling, wise and beautiful children's novel ever, [written] in prose as taut and clean as a ship's sail." In discussing the book for a gathering of children's librarians Eleanor Cameron praised the world building in the story, saying "it is as if [Le Guin] herself has lived on the archipelago." Author David Mitchell called the titular character Ged a "superb creation", and argued that he was a more relatable wizard than those featured in prominent works of fantasy at the time. According to him, characters such as Gandalf were "variants on the archetype of Merlin, a Caucasian scholarly aristocrat amongst sorcerers" with little room to grow, whereas Ged developed as a character through his story. Mitchell also praised the other characters in the story, who he said seemed to have a "fully thought-out inner life" despite being fleeting presences. The 1995 Encyclopedia of Science Fiction said that the Earthsea books had been considered the finest science fiction books for children in the post-World War II period. As fantasy Commentators have noted that the Earthsea novels in general received less critical attention because they were considered children's books. Le Guin herself took exception to this treatment of children's literature, describing it as "adult chauvinist piggery". In 1976, literary scholar George Slusser criticized the "silly publication classification designating the original series as 'children's literature'". Barbara Bucknall stated that "Le Guin was not writing for young children when she wrote these fantasies, nor yet for adults. She was writing for 'older kids.' But in fact she can be read, like Tolkien, by ten-year-olds and by adults. These stories are ageless because they deal with problems that confront us at any age." Only in later years did A Wizard of Earthsea receive attention from a more general audience. Literary scholar Tom Shippey was among the first to treat A Wizard of Earthsea as serious literature, assuming in his analysis of the volume that it belonged alongside works by C. S. Lewis and Fyodor Dostoevsky, among others. Margaret Atwood said that she saw the book as "a fantasy book for adults", and added that the book could be categorized as either young adult fiction or as fantasy, but since it dealt with themes such as "life and mortality and who are we as human beings", it could be read and enjoyed by anybody older than twelve. The Encyclopedia of Science Fiction echoed this view, saying the series's appeal went "far beyond" the young adults for whom it was written. It went on to praise the book as "austere but vivid", and said the series was more thoughtful than the Narnia books by C. S. Lewis. In his 1980 history of fantasy, Brian Attebery called the Earthsea trilogy "the most challenging and richest American fantasy to date". Slusser described the Earthsea cycle as a "work of high style and imagination", and the original trilogy of books a product of "genuine epic vision". In 1974, critic Robert Scholes compared Le Guin's work favorably to that of C. S. Lewis, saying, "Where C. S. Lewis worked out a specifically Christian set of values, Ursula LeGuin works not with a theology but with an ecology, a cosmology, a reverence for the universe as a self-regulating structure." He added that Le Guin's three Earthsea novels were themselves a sufficient legacy for anybody to leave. In 2014, David Pringle called it "a beautiful story—poetic, thrilling, and profound". Accolades A Wizard of Earthsea won or contributed to several notable awards for Le Guin. It won the Boston Globe–Horn Book Award in 1969, and was one of the last winners of the Lewis Carroll Shelf Award ten years later. In 1984 it won the or the "Golden Sepulka" in Poland. In 2000 Le Guin was given the Margaret A. Edwards Award by the American Library Association for young adult literature. The award cited six of her works, including the first four Earthsea volumes, The Left Hand of Darkness, and The Beginning Place. A 1987 poll in Locus ranked A Wizard of Earthsea third among "All-Time Best Fantasy Novels", while in 2014 Pringle listed it at number 39 in his list of the 100 best novels in modern fantasy. Influence The book has been seen as widely influential within the genre of fantasy. Margaret Atwood has called A Wizard of Earthsea one of the "wellsprings" of fantasy literature. The book has been compared to major works of high fantasy such as J. R. R. Tolkien's The Lord of the Rings and L. Frank Baum's The Wonderful Wizard of Oz. The notion that names can exert power is also present in Hayao Miyazaki's 2001 film Spirited Away; critics have suggested that that idea originated with Le Guin's Earthsea series. Novelist David Mitchell, author of books such as Cloud Atlas, described A Wizard of Earthsea as having a strong influence on him, and said that he felt a desire to "wield words with the same power as Ursula Le Guin". Modern writers have credited A Wizard of Earthsea for introducing the idea of a "wizard school", which would later be made famous by the Harry Potter series of books, and with popularizing the trope of a boy wizard, also present in Harry Potter. Reviewers have also commented that the basic premise of A Wizard of Earthsea, that of a talented boy going to a wizard's school and making an enemy with whom he has a close connection, is also the premise of Harry Potter. Ged also receives a scar from the shadow, which hurts whenever the shadow is near him, just as Harry Potter's scar from Voldemort. Commenting on the similarity, Le Guin said that she did not feel that J. K. Rowling "ripped her off", but that Rowling's books received too much praise for supposed originality, and that Rowling "could have been more gracious about her predecessors. My incredulity was at the critics who found the first book wonderfully original. She has many virtues, but originality isn't one of them. That hurt." Themes Coming of age A Wizard of Earthsea focuses on Ged's adolescence and coming of age, and along with the other two works of the original Earthsea trilogy forms a part of Le Guin's dynamic portrayal of the process of growing old. The three novels together follow Ged from youth to old age, and each of them also follow the coming of age of a different character. The novel is frequently described as a Bildungsroman. Scholar Mike Cadden stated that the book is a convincing tale "to a reader as young and possibly as headstrong as Ged, and therefore sympathetic to him". Ged's coming of age is also intertwined with the physical journey he undertakes through the novel. Ged is depicted as proud and yet unsure of himself in multiple situations: early in his apprenticeship he believes Ogion to be mocking him, and later, at Roke, feels put upon by Jasper. In both cases, he believes that others do not appreciate his greatness, and Le Guin's sympathetic narration does not immediately contradict this belief. Cadden writes that Le Guin allows young readers to sympathize with Ged, and only gradually realize that there is a price to be paid for his actions, as he learns to discipline his magical powers. Similarly, as Ged begins his apprenticeship with Ogion, he imagines that he will be taught mysterious aspects of wizardry, and has visions of transforming himself into other creatures, but gradually comes to see that Ogion's important lessons are those about his own self. The passage at the end of the novel, wherein Ged finally accepts the shadow as a part of himself and is thus released from its terror, has been pointed to by reviewers as a rite of passage. Jeanne Walker, for example, wrote that the rite of passage at the end was an analogue for the entire plot of A Wizard of Earthsea, and that the plot itself plays the role of a rite of passage for an adolescent reader. Walker goes on to say, "The entire action of A Wizard of Earthsea ... portrays the hero's slow realization of what it means to be an individual in society and a self in relation to higher powers. Many readers and critics have commented on similarities between Ged's process of growing up and ideas in Jungian psychology. The young Ged has a scary encounter with a shadow creature, which he later realizes is the dark side of himself. It is only after he recognizes and merges with the shadow that he becomes a whole person. Le Guin said that she had never read Jung before writing the Earthsea novels. Le Guin described coming of age as the main theme of the book, and wrote in a 1973 essay that she chose that theme since she was writing for an adolescent audience. She stated that "Coming of age ... is a process that took me many years; I finished it, so far as I ever will, at about age thirty-one; and so I feel rather deeply about it. So do most adolescents. It's their main occupation, in fact." She also said that fantasy was best suited as a medium for describing coming of age, because exploring the subconscious was difficult using the language of "rational daily life". The coming of age that Le Guin focused on included not just psychological development, but moral changes as well. Ged needs to recognize the balance between his power and his responsibility to use it well, a recognition which comes as he travels to the stone of Terrenon and sees the temptation that that power represents. Equilibrium and Taoist themes The world of Earthsea is depicted as being based on a delicate balance, which most of its inhabitants are aware of, but which is disrupted by somebody in each of the original trilogy of novels. This includes an equilibrium between land and sea (implicit in the name Earthsea), and between people and their natural environment. In addition to physical equilibrium, there is a larger cosmic equilibrium, which everybody is aware of, and which wizards are tasked with maintaining. Describing this aspect of Earthsea, Elizabeth Cummins wrote, "The principle of balanced powers, the recognition that every act affects self, society, world, and cosmos, is both a physical and a moral principle of Le Guin's fantasy world." The concept of balance is related to the novel's other major theme of coming of age, as Ged's knowledge of the consequences of his own actions for good or ill is necessary for him to understand how the balance is maintained. While at the school of Roke, the Master Hand tells him: The influence of Taoism on Le Guin's writing is evident through much of the book, especially in her depiction of the "balance". At the end of the novel, Ged may be seen to embody the Taoist way of life, as he has learned not to act unless absolutely necessary. He has also learned that seeming opposites, like light and dark or good and evil, are actually interdependent. Light and dark themselves are recurring images within the story. Reviewers have identified this belief as evidence of a conservative ideology within the story, shared with much of fantasy. In emphasizing concerns over balance and equilibrium, scholars have argued, Le Guin essentially justifies the status quo, which wizards strive to maintain. This tendency is in contrast to Le Guin's science fiction writing, in which change is shown to have value. The nature of human evil forms a significant related theme through A Wizard of Earthsea as well as the other Earthsea novels. As with other works by Le Guin, evil is shown as a misunderstanding of the balance of life. Ged is born with great power in him, but the pride that he takes in his power leads to his downfall; he tries to demonstrate his strength by bringing a spirit back from the dead, and in performing this act against the laws of nature, releases the shadow that attacks him. Slusser suggests that although he is provoked into performing dangerous spells first by the girl on Gont and then by Jasper, this provocation exists in Ged's mind. He is shown as unwilling to look within himself and see the pride that drives him to do what he does. When he accepts the shadow into himself, he also finally accepts responsibility for his own actions, and by accepting his own mortality he is able to free himself. His companion Vetch describes the moment by saying Thus, although there are several dark powers in Earthsea (like the dragon, and the stone of Terrenon) the true evil was not one of these powers, or even death, but Ged's actions that went against the balance of nature. This is contrary to conventional Western and Christian storytelling, in which light and darkness are often considered opposites, and are seen as symbolizing good and evil, which are constantly in conflict. On two different occasions, Ged is tempted to try to defy death and evil, but eventually learns that neither can be eliminated: instead, he chooses not to serve evil, and stops denying death. True names In Le Guin's fictional universe, to know the true name of an object or a person is to have power over it. Each child is given a true name when they reach puberty, a name which they share only with close friends. Several of the dragons in the later Earthsea novels, like Orm Embar and Kalessin, are shown as living openly with their names, which do not give anybody power over them. In A Wizard of Earthsea, however, Ged is shown to have power over Yevaud. Cadden writes that this is because Yevaud still has attachment to riches and material possessions, and is thus bound by the power of his name. Wizards exert their influence over the equilibrium through the use of names, thus linking this theme to Le Guin's depiction of a cosmic balance. According to Cummins, this is Le Guin's way of demonstrating the power of language in shaping reality. Since language is the tool we use for communicating about the environment, she argues that it also allows humans to affect the environment, and the wizards' power to use names symbolizes this. Cummins went on to draw an analogy between the wizards' use of names to change things with the creative use of words in fictional writing. Shippey wrote that Earthsea magic seems to work through what he called the "Rumpelstiltskin theory", in which names have power. He argued that this portrayal was part of Le Guin's effort to emphasize the power of words over objects, which, according to Shippey, was in contrast to the ideology of other writers, such as James Frazer in The Golden Bough. Esmonde argued that each of the first three Earthsea books hinged on an act of trust. In A Wizard of Earthsea, Vetch trusts Ged with his true name when the latter is at his lowest ebb emotionally, thus giving Ged complete power over him. Ged later offers Tenar the same gift in The Tombs of Atuan, thereby allowing her to learn trust. Style and structure Language and mood A Wizard of Earthsea and other novels of the Earthsea cycle differ notably from Le Guin's early Hainish cycle works, although they were written at a similar time. George Slusser described the Earthsea works as providing a counterweight to the "excessive pessimism" of the Hainish novels. He saw the former as depicting individual action in a favorable light, in contrast to works such as "Vaster than Empires and More Slow". The Encyclopedia of Science Fiction said the book was pervaded by a "grave joyfulness". In discussing the style of her fantasy works, Le Guin herself said that in fantasy it was necessary to be clear and direct with language, because there is no known framework for the reader's mind to rest upon. The story often appears to assume that readers are familiar with the geography and history of Earthsea, a technique which allowed Le Guin to avoid exposition: a reviewer wrote that this method "gives Le Guin's world the mysterious depths of Tolkien's, but without his tiresome back-stories and versifying". In keeping with the notion of an epic, the narration switches from looking ahead into Ged's future and looking back into the past of Earthsea. At the same time, Slusser described the mood of the novel as "strange and dreamlike", fluctuating between objective reality and the thoughts in Ged's mind; some of Ged's adversaries are real, while others are phantoms. This narrative technique, which Cadden characterizes as "free indirect discourse", makes the narrator of the book seem sympathetic to the protagonist, and does not distance his thoughts from the reader. Myth and epic A Wizard of Earthsea has strong elements of an epic; for instance, Ged's place in Earthsea history is described at the very beginning of the book in the following terms: "some say the greatest, and surely the greatest voyager, was the man called Sparrowhawk, who in his day became both dragonlord and Archmage." The story also begins with words from the Earthsea song "The Creation of Éa", which forms a ritualistic beginning to the book. The teller of the story then goes on to say that it is from Ged's youth, thereby establishing context for the rest of the book. In comparison with the protagonists of many of Le Guin's other works, Ged is superficially a typical hero, a mage who sets out on a quest. Reviewers have compared A Wizard of Earthsea to epics such as Beowulf. Scholar Virginia White argued that the story followed a structure common to epics in which the protagonist begins an adventure, faces trials along the way, and eventually returns in triumph. White went on to suggest that this structure can be seen in the series as a whole, as well as in the individual volumes. Le Guin subverted many of the tropes typical to such "monomyths"; the protagonists of her story were all dark-skinned, in comparison to the white-skinned heroes more traditionally used; the Kargish antagonists, in contrast, were white-skinned, a switching of race roles that has been remarked upon by multiple critics. Critics have also cited her use of characters from multiple class backgrounds as a choice subversive to conventional Western fantasy. At the same time, reviewers questioned Le Guin's treatment of gender in A Wizard of Earthsea, and the original trilogy as a whole. Le Guin, who later became known as a feminist, chose to restrict the use of magic to men and boys in the first volume of Earthsea. Initial critical reactions to A Wizard of Earthsea saw Ged's gender as incidental. In contrast, The Tombs of Atuan saw Le Guin intentionally tell a female coming-of-age story, which was nonetheless described as perpetuating a male-dominated model of Earthsea. Tehanu (1990), published as the fourth volume of Earthsea 18 years after the third, has been described both by Le Guin and her commentators as a feminist re-imagining of the series, in which the power and status of the chief characters are reversed, and the patriarchal social structure questioned. Commenting in 1993, Le Guin wrote that she could not continue [Earthsea after 1972] until she had "wrestled with the angels of the feminist consciousness". Several critics have argued that by combining elements of epic, Bildungsroman, and young adult fiction, Le Guin succeeded in blurring the boundaries of conventional genres. In a 1975 commentary Francis Molson argued that the series should be referred to as "ethical fantasy", a term which acknowledged that the story did not always follow the tropes of heroic fantasy, and the moral questions that it raised. The term did not become popular. A similar argument was made by children's literature critic Cordelia Sherman in 1985; she argued that A Wizard of Earthsea and the rest of the series sought "to teach children by dramatic example what it means to be a good adult". Adaptations A condensed, illustrated version of the first chapter was printed by World Book in the third volume of Childcraft in 1989. Multiple audio versions of the book have been released. BBC Radio produced a radioplay version in 1996 narrated by Judi Dench, and a six-part series adapting the Earthsea novels in 2015, broadcast on Radio 4 Extra. In 2011, the work was produced as an unabridged recording performed by Robert Inglis. Two screen adaptations of the story have also been produced. An original mini-series titled Legend of Earthsea was broadcast in 2004 on the Sci Fi Channel. It is based very loosely on A Wizard of Earthsea and The Tombs of Atuan. In an article published in Salon, Le Guin expressed strong displeasure at the result. She stated that by casting a "petulant white kid" as Ged (who has red-brown skin in the book) the series "whitewashed Earthsea", and had ignored her choice to write the story of a non-white character, a choice she said was central to the book. This sentiment was shared by a review in The Ultimate Encyclopedia of Fantasy, which said that Legend of Earthsea "totally missed the point" of Le Guin's novels, "ripping out all the subtlety, nuance and beauty of the books and inserting boring cliches, painful stereotypes and a very unwelcome 'epic' war in their place". Studio Ghibli released an adaptation of the series in 2006 titled Tales from Earthsea. The film very loosely combines elements of the first, third, and fourth books into a new story. Le Guin commented with displeasure on the film-making process, saying that she had acquiesced to the adaptation believing Hayao Miyazaki would be producing the film himself, which was eventually not the case. Le Guin praised the imagery of the film, but disliked the use of violence. She also expressed dissatisfaction with the portrayal of morality, and in particular the use of a villain who could be slain as a means of resolving conflict, which she said was antithetical to the message of the book. The film received generally mixed responses. References Bibliography Further reading External links Earthsea novels 1968 American novels 1968 children's books 1968 fantasy novels American young adult novels American bildungsromans Books illustrated by Anne Yvonne Gilbert Books illustrated by Ruth Robbins Young adult fantasy novels Novels set on islands Novels set in schools American fantasy novels adapted into films American novels adapted into television shows sv:Övärlden#Trollkarlen från Övärlden
2893
https://en.wikipedia.org/wiki/Alex%20Lifeson
Alex Lifeson
Aleksandar Živojinović (born 27 August 1953), known professionally as Alex Lifeson (), is a Canadian musician, best known as the guitarist for the rock band Rush. In 1968, Lifeson co-founded a band that would later become Rush, with drummer John Rutsey and bassist and lead vocalist Jeff Jones. Jones was replaced by Geddy Lee a month later, and Rutsey was replaced by Neil Peart in 1974, and this lineup remained untouched until the band's dissolution in 2018. Throughout their entire history, Lifeson was the only continuous member who stayed in Rush since its inception, and along with bass guitarist/vocalist Geddy Lee, the only member to appear on all of the band's albums. With Rush, Lifeson played guitar, as well as other various string instruments such as mandola, mandolin, and bouzouki. He also performed backing vocals in live performances as well as the studio albums Rush (1974), Presto (1989) and Roll the Bones (1991) and occasionally played keyboards and bass pedal synthesizers. Like the other members of Rush, Lifeson performed real-time on-stage triggering of sampled instruments. Along with his bandmates Geddy Lee and Neil Peart, Lifeson was made an Officer of the Order of Canada on 9 May 1996. The trio was the first rock band to be so honoured as a group. In 2013, he was inducted with Rush into the Rock & Roll Hall of Fame. Lifeson was ranked 98th on Rolling Stone'''s list of the 100 greatest guitarists of all time and third (after Eddie Van Halen and Brian May) in a Guitar World readers' poll listing the 100 greatest guitarists. The bulk of Lifeson's work in music has been with Rush, although Lifeson has contributed to a body of work outside the band as well, including a solo album titled Victor (1994). Aside from music, Lifeson has been a painter, a licensed aircraft pilot, an actor, and the former part-owner of a Toronto bar and restaurant called The Orbit Room. Biography Early life Lifeson was born Aleksandar Živojinović (Serbian: Александар Живојиновић) in Fernie, British Columbia. His parents, Nenad and Melanija Živojinović, were Serb immigrants from Yugoslavia. He was raised in Toronto. His stage surname of "Lifeson" is a calque of his birth surname Živojinović, which can be literally translated into English as "son of life". His formal musical education began on the viola, but he abandoned it in favor of the guitar at the age of 12. Lifeson recalls what inspired him to play guitar in a 2008 interview: His first guitar was a Christmas gift from his father, a six-string Kent classical acoustic which was later replaced by an electric Japanese model. During his adolescent years, he was influenced primarily by the likes of Jimi Hendrix, Pete Townshend, Jeff Beck, Eric Clapton, Jimmy Page, Steve Hackett, and Allan Holdsworth; he explained in 2011 that "Clapton's solos seemed a little easier and more approachable. I remember sitting at my record player and moving the needle back and forth to get the solo in 'Spoonful.' But there was nothing I could do with Hendrix." In 1963, Lifeson met future Rush drummer John Rutsey in school. Both interested in music, they decided to form a band. Lifeson was primarily a self-taught guitarist with the only formal instruction coming from a high school friend in 1971 who taught classical guitar lessons. This training lasted for roughly a year and a half. Lifeson's first girlfriend, Charlene, gave birth to their eldest son, Justin, in October 1970. The couple married in 1975, and their second son, Adrian, was born two years later. Adrian is also involved in music, and performed on "At the End" and "The Big Dance" from Lifeson's 1996 solo project, Victor. Rush Lifeson's neighbour John Rutsey began experimenting on a rented drum kit. In 1968, Lifeson and Rutsey formed The Projection, which disbanded a few months later. In August 1968, following the recruitment of original bassist and vocalist Jeff Jones, Lifeson and Rutsey founded Rush. Geddy Lee, a high school friend of Lifeson, assumed Jones's role soon after. Instrumentally, Lifeson is renowned for his signature riffing, electronic effects and processing, unorthodox chord structures, and the copious arsenal of equipment he has used over the years.Alex Lifeson minor overview Guitar Player Accessed 16 July 2007 Rush was on hiatus for several years starting in 1997 owing to personal tragedies in Neil Peart's life, and Lifeson had not picked up a guitar for at least a year following those events. However, after some work in his home studio and on various side projects, Lifeson returned to the studio with Rush to begin work on 2002's Vapor Trails. Vapor Trails is the first Rush album since the 1970s to lack keyboards—as such, Lifeson used over 50 different guitars in what Shawn Hammond of Guitar Player called "his most rabid and experimental playing ever." Geddy Lee was amenable to leaving keyboards off the album due in part to Lifeson's ongoing concern about their use. Lifeson's approach to the guitar tracks for the album eschewed traditional riffs and solos in favour of "tonality and harmonic quality." During live performances, he used foot pedals to cue various synthesizer, guitar, and backing vocal effects as he played. Victor While the bulk of Lifeson's work in music has been with Rush, his first major outside work was his solo project, Victor, released in 1996. Victor was attributed as a self-titled work (i.e. Victor is attributed as the artist as well as the album title). This was done deliberately as an alternative to issuing the album explicitly under Lifeson's name. The title track is from the W. H. Auden poem, also entitled "Victor". Both son Adrian and wife Charlene also contributed to the album. Side projects Lifeson has also contributed to a body of work outside his involvement with the band in the form of instrumental contributions to other musical outfits. He made a guest appearance on the 1985 Platinum Blonde album Alien Shores performing guitar solos on the songs "Crying Over You" and "Holy Water". Later, in 1990, he appeared on Lawrence Gowan's album Lost Brotherhood to play guitar. In 1995, he guested on two tracks on Tom Cochrane's Ragged Ass Road album and then in 1996 on I Mother Earth's "Like a Girl" from the Scenery and Fish album. In 1997, he appeared on the Merry Axemas: A Guitar Christmas album. Lifeson played "The Little Drummer Boy" which was released as track 9 on the album. In 2006, Lifeson founded the Big Dirty Band, which he created for the purpose of providing original soundtrack material for Trailer Park Boys: The Movie. Lifeson jammed regularly with the Dexters (the Orbit Room house band from 1994 to 2004). Lifeson made a guest appearance on the 2007 album Fear of a Blank Planet by UK progressive rock band Porcupine Tree, contributing a solo during the song "Anesthetize". He also appeared on the 2008 album Fly Paper by Detroit progressive rockers Tiles. He plays on the track "Sacred and Mundane". Outside band related endeavours, Lifeson composed the theme for the first season of the science-fiction TV series Andromeda. He also produced three songs from the album Away from the Sun by 3 Doors Down. He was executive producer and contributor to the 2014 album "Come to Life" by Keram Malicki-Sanchez - playing guitar on the songs "Mary Magdalene", "Moving Dark Circles" and "The Devil Knows Me Well," and later on Keram's subsequent singles "Artificial Intelligence," (2019), "That Light," (2020) and "Rukh." (2021). Alex Lifeson is featured on Marco Minnemann's 2017 release Borrego, on which he played guitars on three songs and co-wrote the track "On That Note". In 2018, he played lead guitar on Fu Manchu's 18-minute mostly instrumental track "Il Mostro Atomico" from the group's Clone of the Universe album. In 2019 he was featured on the song "Charmed" from the Don Felder solo album American Rock 'n' Roll. On 15 June 2021, Lifeson released two new instrumental songs, "Kabul Blues" and "Spy House" on his website alexlifeson.com. The songs were released as a self titled project. Andy Curran played bass on both songs, and drums on "Spy House" were done by David Quinton Steinberg. Envy of None The first single, "Liar", from Envy of None's debut album was released on 12 January 2022. Envy of None consists of Lifeson, Curran, singer Maiah Wynne, and producer and engineer Alfio Annibalini. Envy of None's self-titled debut album, which includes "Liar," "Kabul Blues," and "Spy House," was released on 8 April. Television and film appearances Lifeson made his film debut as himself under his birth name in the 1973 Canadian documentary film Come on Children. He has appeared in several installments of the Canadian mockumentary franchise Trailer Park Boys. In 2003, he was featured in an episode titled "Closer to the Heart", playing a partly fictional version of himself. In the episode, he is kidnapped by Ricky and held as punishment for his inability (or refusal) to provide the main characters with free tickets to a Rush concert. In the end of the episode, Alex reconciles with the characters, and performs a duet of "Closer to the Heart" with Bubbles at the trailer park. In 2006, Lifeson appeared in Trailer Park Boys: The Movie as a traffic cop in the opening scene and in 2009 he appeared in their follow up movie, Trailer Park Boys: Countdown to Liquor Day, as an undercover vice cop in drag. In 2017, Lifeson appeared in an episode of the spin-off series Trailer Park Boys: Out of the Park: USA titled "Memphis." He also voiced Big Chunk in the first season of Trailer Park Boys: The Animated Series. In 2008, Lifeson and the rest of Rush played "Tom Sawyer" at the end of an episode of The Colbert Report. According to Colbert, this was their first appearance on American television as a band in 33 years. In 2009, he and the rest of the band appeared as themselves in the comedy I Love You, Man. Lifeson appears as the border guard in the 2009 movie Suck. Lifeson and bandmate Geddy Lee appear in the series Chicago Fire, season 4, episode 6, called "2112", which first aired on 17 November 2015. The role of Dr. Funtime in The Drunk and On Drugs Happy Funtime Hour was originally written with Lifeson in mind, but due to scheduling conflicts the role was given to Maury Chaykin instead. Book forewords Lifeson has penned forewords to four books: Behind the Stage Door by Rich Engler in 2013; Shredders!: The Oral History Of Speed Guitar (And More) by Greg Prato in 2017; Geddy Lee's Big Beautiful Book of Bass by Geddy Lee in 2018; and Domenic Troiano: His Life and Music by Mark Doble and Frank Troiano in 2021. Legal issues On New Year's Eve 2003, Lifeson, his son and his daughter-in-law were arrested at the Ritz-Carlton hotel in Naples, Florida. Lifeson, after intervening in an altercation between his son and police, was accused of assaulting a sheriff's deputy in what was described as a drunken brawl. In addition to suffering a broken nose at the hands of the officers, Lifeson was tased six times. His son was also tased repeatedly. On 21 April 2005, Lifeson and his son agreed to a plea deal with the local prosecutor for the State's Attorney office to avoid jail time by pleading no contest to a first-degree misdemeanor charge of resisting arrest without violence. As part of the plea agreement, Lifeson and his son were each sentenced to 12 months of probation with the adjudication of that probation suspended. Lifeson acknowledged his subsequent legal action against both the Ritz-Carlton and the Collier County Sheriff's Office for "their incredibly discourteous, arrogant and aggressive behaviour of which I had never experienced in 30 years of travel". Although both actions were initially dismissed in April 2007, legal claims against the Ritz-Carlton were reinstated upon appeal and they were settled out of court on a confidential basis in August 2008. In his journal-based book Roadshow: Landscape with Drums – A Concert Tour by Motorcycle, Peart relates the band's perspective on the events of that New Year's Eve. Guitar equipment Early Rush (1970s) In Rush's early career, Lifeson used a Gibson ES-335 for the first tour, and in 1976 bought a 1974 Gibson Les Paul; he used those two guitars until the late 1970s. He had a Fender Stratocaster with a Bill Lawrence humbucker and Floyd Rose vibrato bridge as backup "and for a different sound." For the A Farewell to Kings sessions, Lifeson began using a Gibson EDS-1275 for songs like "Xanadu" and his main guitar became a white Gibson ES-355. During this period Lifeson used Hiwatt amplifiers. He played a twelve-string Gibson B-45 on songs like "Closer to the Heart." 1980s and 1990s From 1980 to 1986, Lifeson used four identically modified Stratocasters, all of them equipped with the Floyd Rose bridge. As a joke, he called these Hentor Sportscasters – a made-up name inspired by Peter Henderson's name, who was the producer of Grace Under Pressure. He would start using them again twenty years later. He also played a Gibson Howard Roberts Fusion and an Ovation Adamas acoustic/electric guitar. By 1987, Lifeson switched to Signature guitar despite describing them as "awful to play—very uncomfortable--...had a particular sound I liked." Lifeson primarily used PRS guitars in the later-half of the 1990 Presto tour, and again during the recording of Roll The Bones in 1990/1991. He would continue to play PRS for the next sixteen years through the recording and touring of Counterparts, Test for Echo and Vapor Trails as well as the R30 tour. During this period, he also played several Fender Telecasters. 2000s onward: Return to Gibson guitars In 2011, Lifeson said that for the past few years he "used Gibson almost exclusively. There's nothing like having a low-slung Les Paul over my shoulder." Gibson "Alex Lifeson Axcess" In early 2011, Gibson introduced the "Alex Lifeson Axcess", a guitar specially designed for him. These are custom made Les Pauls with Floyd Rose tremolo systems and piezoacoustic pick-ups. He used these two custom Les Pauls on the Time Machine Tour. These guitars are also available through Gibson, in a viceroy Brown or Crimson colour. Lifeson used these two guitars heavily on the tour. For the 2012-2013 Clockwork Angels tour, Gibson built an Alex Lifeson Axcess model in black which became Lifeson's primary guitar for much of the show. For all acoustic work, he played one of his Axcess guitars using the piezo pick-ups; no acoustic guitars were used at all in the Clockwork Angels show. Paul Reed Smith acoustic signature guitar For the 2015 R40 Tour, Lifeson used his signature acoustic guitar model by Paul Reed Smith. The guitar is currently available for private stock order. Gibson R40 Signature Les Paul Axcess Gibson introduced an Alex Lifeson R40 Les Paul Axcess signature guitar in June 2015. This is a limited edition with 50 guitars signed and played by Lifeson, and another 250 available without the signature. Gibson Custom Alex Lifeson Signature ES Les Paul semi-hollow At the 2017 Winter NAMM Show, Gibson representative Mike Voltz introduced an Antique White Gibson Custom Alex Lifeson Signature ES Les Paul semi-hollow guitar, a hybrid of a Les Paul Custom & an ES 335, with only 200 made. Mike also introduced the Antique White as a new color from Gibson for this Custom (note: Gibson names this color as 'Classic White' on their web site which may be an error due to other Gibson reps labeling it as Antique White). Alex played this Custom on the last Rush tour. Amplification In 2005, Hughes & Kettner introduced an Alex Lifeson signature series amplifier; Lifeson donates his royalties from the sale of these signature models to UNICEF. In 2012, Lifeson abandoned his signature Triamps in favour of custom-built Lerxst Omega Silver Jubilee clones, handmade by Mojotone in Burgaw, NC and Mesa/Boogie Mark V heads. He still uses the Hughes & Kettner Coreblades. Effects For effects, Lifeson is known to use chorus, phase shifting, delay and flanging. Throughout his career, he has used well-known pedals such as the Echoplex delay pedal, Electro-Harmonix Electric Mistress flanger, the BOSS CE-1 chorus and the Dunlop crybaby wah, among others. Lifeson and his guitar technician Scott Appleton have discussed in interviews Lifeson's use of Fractal Audio's Axe-FX, Apple Inc.'s MainStage, and Native Instruments' Guitar Rig. Other instruments played Stringed instruments In addition to acoustic and electric guitars, Lifeson has also played mandola, mandolin and bouzouki on some Rush studio albums, including Test for Echo, Vapor Trails and Snakes & Arrows. For his Victor project and Little Drummer Boy for the Merry Axemas album, he also played bass and programmed synthesizers. Electronic instruments During live Rush performances, Lifeson used MIDI controllers that enabled him to use his free hands and feet to trigger sounds from digital samplers and synthesizers, without taking his hands off his guitar. (Prior to this, Lifeson used Moog Taurus Bass Pedals before they were replaced by Korg MIDI pedals in the 1980s.) Lifeson and his bandmates shared a desire to accurately depict songs from their albums when playing live performances. Toward this goal, beginning in the late 1980s the band equipped their live performances with a capacious rack of samplers. The band members used these samplers in real-time to recreate the sounds of non-traditional instruments, accompaniments, vocal harmonies, and other sound "events" that are familiarly heard on the studio versions of the songs. In live performances, the band members shared duties throughout most songs, with each member triggering certain sounds with his available limbs, while playing his primary instrument(s). Influence Many guitarists have cited Lifeson as an influence, such as Paul Gilbert of Mr. Big, John Petrucci of Dream Theater, Steven Wilson of Porcupine Tree, Jim Martin of Faith No More, Denis "Piggy" D'Amour of Voivod, Parris Mayhew formerly of Cro-Mags, and John Wesley. James Hetfield from Metallica named Lifeson one of the best rhythm guitarists of all time. Marillion guitarist Steve Rothery has expressed his admiration for Lifeson's "dexterity" as a live performer and described Rush as a "fantastic live band". Jazz guitarist Kurt Rosenwinkel, after citing him as an influence, praised his "incredible sound and imagination". Awards and honours "Best Rock Talent" by Guitar for the Practicing Musician in 1983 "Best Rock Guitarist" by Guitar Player Magazine in 1984 and May 2008 Runner-up for "Best Rock Guitarist" in Guitar Player in 1982, 1983, 1985, 1986 Inducted into the Guitar for the Practicing Musician Hall of Fame, 1991 1996 – Officer of the Order of Canada, along with bandmates Geddy Lee and Neil Peart 2007 – Main belt asteroid "(19155) Lifeson" named after Alex Lifeson "Best Article" for "Different Strings" in Guitar Player (September 2007 issue). Most Ferociously Brilliant Guitar Album (Snakes & Arrows) – Guitar Player Magazine'', May 2008 2013 – With Rush, Rock and Roll Hall of Fame inductee Discography With Rush Solo With Envy of None Following Rush's dissolution in 2018 and Neil Peart's death in 2020, Lifeson formed the supergroup Envy of None with himself on guitar, mandola and banjo, Alfio Annibalini on guitar and keyboards, Andy Curran on bass, guitar and backing vocals and Maiah Wynne on lead vocals and keyboards. Collaborations Appearances References External links Official website Audio-Technica interview with Alex Read 2002 CNN interview with Alex Order of Canada citation Lerxst Amplification 1953 births Living people People from the Regional District of East Kootenay Musicians from British Columbia Canadian people of Serbian descent Officers of the Order of Canada Canadian atheists Canadian male film actors Canadian male television actors Canadian male voice actors Canadian rock guitarists Canadian male guitarists Canadian heavy metal guitarists Lead guitarists Progressive rock guitarists Rush (band) members Anthem Records artists Anthem Records Big Dirty Band members Envy of None members 20th-century Canadian guitarists 21st-century Canadian guitarists
2900
https://en.wikipedia.org/wiki/File%20archiver
File archiver
A file archiver is a computer program that combines a number of files together into one archive file, or a series of archive files, for easier transportation or storage. File archivers may employ lossless data compression in their archive formats to reduce the size of the archive. Basic archivers just take a list of files and concatenate their contents sequentially into archives. The archive files need to store metadata, at least the names and lengths of the original files, if proper reconstruction is possible. More advanced archivers store additional metadata, such as the original timestamps, file attributes or access control lists. The process of making an archive file is called archiving or packing. Reconstructing the original files from the archive is termed unarchiving, unpacking or extracting. History An early archiver was the Multics command archive, descended from the CTSS command of the same name, which was a basic archiver and performed no compression. Multics also had a "tape_archiver" command, abbreviated ta, which was perhaps the forerunner of the Unix command tar. Unix archivers The Unix tools ar, tar, and cpio act as archivers but not compressors. Users of the Unix tools use additional compression tools, such as gzip, bzip2, or xz, to compress the archive file after packing or remove compression before unpacking the archive file. The filename extensions are successively added at each step of this process. For example, archiving a collection of files with tar and then compressing the resulting archive file with gzip results a file with .tar.gz extension. This approach has two goals: It follows the Unix philosophy that each program should accomplish a single task to perfection, as opposed to attempting to accomplish everything with one tool. As compression technology progresses, users may use different compression programs without having to modify or abandon their archiver. The archives use solid compression. When the files are combined, the compressor can exploit redundancy across several archived files and achieve better compression than a compressor that compresses each files individually. This approach, however, has disadvantages too: Extracting or modifying one file is difficult. Extracting one file requires decompressing an entire archive, which can be time- and space-consuming. Modifying one means the file needs to be put back into archive and the archive recompressed again. This operation requires additional time and disk space. The archive becomes damage-prone. If the area holding shared data for several files is damaged, all those files are lost. It is impossible to take advantage of redundancy between files unless the compression window is larger than the size of an individual file. For example, gzip uses DEFLATE, which typically operates with a 32768-byte window, whereas bzip2 uses a Burrows–Wheeler transform roughly 27 times bigger. xz defaults to 8 MiB but supports significantly larger windows. Windows archivers The built-in archiver of Microsoft Windows as well as third-party archiving software, such as WinRAR and 7-zip, often use a graphical user interface. They also offer an optional command-line interface, while Windows itself does not. Windows archivers perform both archiving and compression. Solid compression may or may not be offered, depending on the product: Windows itself does not support it; WinRAR and 7-zip offer it as an option that can be turned on or off. See also Comparison of file archivers Archive format List of archive formats Comparison of archive formats References External links Computer storage systems Computer file systems Computer archives Utility software types
2905
https://en.wikipedia.org/wiki/Artemis
Artemis
In ancient Greek religion and mythology, Artemis (; ) is the goddess of the hunt, the wilderness, wild animals, nature, vegetation, childbirth, care of children, and chastity. In later times, in some places, she was identified with Selene, the personification of the Moon. She often roamed the forests of Greece, attended by her large entourage, mostly made up of nymphs, some mortals, and hunters. The goddess Diana is her Roman equivalent. In Greek tradition, Artemis is the daughter of Zeus and Leto, and the twin sister of Apollo. In most accounts, the twins are the products of an extramarital liaison. For this, Zeus' wife Hera forbade Leto from giving birth anywhere on land. Only the island of Delos gave refuge to Leto, allowing her to give birth to her children. Usually, Artemis is the twin to be born first, who then proceeds to assist Leto in the birth of the second child, Apollo. Like her brother, she was a kourotrophic (child-nurturing) deity, that is the patron and protector of young children, especially young girls, and women, and was believed to both bring disease upon women and children and relieve them of it. Artemis was worshipped as one of the primary goddesses of childbirth and midwifery along with Eileithyia and Hera. Much like Athena and Hestia, Artemis preferred to remain a maiden goddess and was sworn never to marry, so was one of the three Greek virgin goddesses, over whom the goddess of love and lust, Aphrodite, had no power whatsoever. In myth and literature, Artemis is presented as a hunting goddess of the woods, surrounded by her followers, who are not to be crossed. In the myth of Actaeon, when the young hunter sees her bathing naked, he is transformed into a deer by the angered goddess and is then devoured by his own hunting dogs, who do not recognize their master. In the story of Callisto, the girl is driven away from Artemis' company after breaking her vow of virginity, having lain with and been impregnated by Zeus. In the Epic tradition, Artemis halted the winds blowing the Greek ships during the Trojan War, stranding the Greek fleet in Aulis, after King Agamemnon, the leader of the expedition, shot and killed her sacred deer. Artemis demanded the sacrifice of Iphigenia, Agamemnon's young daughter, as compensation for her slain deer. In most versions, when Iphigenia is led to the altar to be offered as a sacrifice, Artemis pities her and takes her away, leaving another deer in her place. In the war that followed, Artemis, along with her twin brother and mother, supported the Trojans against the Greeks, and challenged Hera into battle. Artemis was one of the most widely venerated of the Ancient Greek deities; her worship spread throughout ancient Greece, with her multiple temples, altars, shrines, and local veneration found everywhere in the ancient world. Her great temple at Ephesus was one of the Seven Wonders of the Ancient World, before it was burnt to the ground. Artemis' symbols included a bow and arrow, a quiver, and hunting knives, and the deer and the cypress were sacred to her. Diana, her Roman equivalent, was especially worshipped on the Aventine Hill in Rome, near Lake Nemi in the Alban Hills, and in Campania. Etymology The name "Artemis" (n., f.) is of unknown or uncertain etymology, although various sources have been proposed. R. S. P. Beekes suggested that the e/i interchange points to a Pre-Greek origin. Artemis was venerated in Lydia as Artimus. Georgios Babiniotis, while accepting that the etymology is unknown, also states that the name is already attested in Mycenean Greek and is possibly of pre-Greek origin. The name may be related to Greek árktos "bear" (from PIE *h₂ŕ̥tḱos), supported by the bear cult the goddess had in Attica (Brauronia) and the Neolithic remains at the Arkoudiotissa Cave, as well as the story of Callisto, which was originally about Artemis (Arcadian epithet kallisto); this cult was a survival of very old totemic and shamanistic rituals and formed part of a larger bear cult found further afield in other Indo-European cultures (e.g., Gaulish Artio). It is believed that a precursor of Artemis was worshipped in Minoan Crete as the goddess of mountains and hunting, Britomartis. While connection with Anatolian names has been suggested, the earliest attested forms of the name Artemis are the Mycenaean Greek , a-te-mi-to /Artemitos/ (gen.) and , a-ti-mi-te /Artimitei/ (dat.), written in Linear B at Pylos. According to J. T. Jablonski, the name is also Phrygian and could be "compared with the royal appellation Artemas of Xenophon. Charles Anthon argued that the primitive root of the name is probably of Persian origin from *arta, *art, *arte, all meaning "great, excellent, holy", thus Artemis "becomes identical with the great mother of Nature, even as she was worshipped at Ephesus". Anton Goebel "suggests the root στρατ or ῥατ, "to shake", and makes Artemis mean the thrower of the dart or the shooter". Ancient Greek writers, by way of folk etymology, and some modern scholars, have linked Artemis (Doric Artamis) to , artamos, i.e. "butcher" or, like Plato did in Cratylus, to , artemḗs, i.e. "safe", "unharmed", "uninjured", "pure", "the stainless maiden". A. J. Van Windekens tried to explain both and Artemis from , atremḗs, meaning "unmoved, calm; stable, firm" via metathesis. Description Artemis was the most popular goddess in Ancient Greece. The most frequent name of a month in the Greek calendars was Artemision in Ionia, Artemisios or Artamitios in the Doric and Aeolic territories and in Macedonia. Also Elaphios in Elis, Elaphebolion in Athens, Iasos, Apollonia of Chalkidice and Munichion in Attica. In the calendars of Aetolia, Phocis and Gytheion there was the month Laphrios and in Thebes, Corcyra, and Byzantion the month Eucleios. The goddess was venerated in festivals during spring. Artemis is presented as a goddess who delights in hunting and punishes harshly those who cross her. Artemis' wrath is proverbial, and represents the hostility of wild nature to humans. Homer calls her , "the mistress of animals", a title associated with representations in art going back as far as the Bronze Age, showing a woman between a pair of animals. Artemis carries with her certain functions and characteristics of a Minoan form whose history was lost in the myths. In some cults she retains the theriomorphic form of a Pre-Greek goddess who was conceived with the shape of a bear (arktos: bear). Kallisto in Arcadia is a hypostasis of Artemis with the shape of a bear, and her cults at Brauron and at Piraeus (Munichia) are remarkable for the arkteia where virgin girls before marriage were disguised as she-bears. The ancient Greeks called potnia theron the representation of the goddess between animals; on a Greek vase from circa 570 BCE, a winged Artemis stands between a spotted panther and a deer. "Potnia theron" is very close to the daimons and this differentiates her from the other Greek divinities. This is the reason that Artemis was later identified with Hecate, since the daimons were tutelary deities. Hecate was the goddess of crossroads and she was the queen of the witches. Laphria is the Pre-Greek "mistress of the animals" at Delphi and Patras. There was a custom to throw animals alive into the annual fire of the fest. The festival at Patras was introduced from Calydon and this relates Artemis to the Greek heroine Atalanta who symbolizes freedom and independence. Other epithets that relate Artemis to the animals are Amarynthia and Kolainis. In the Homeric poems Artemis is mainly the goddess of hunting, because it was the most important sport in Mycenean Greece. An almost formulaic epithet used in the Iliad and Odyssey to describe her is iocheaira, "she who shoots arrows", often translated as "she who delights in arrows" or "she who showers arrows". She is called Artemis Chrysilakatos, of the golden shafts, or Chrysinios, of the golden reins, as a goddess of hunting in her chariot. The Homeric Hymn 27 to Artemis paints this picture of the goddess: According to the beliefs of the first Greeks in Arcadia Artemis is the first nymph, a goddess of free nature. She is an independent free woman, and she doesn't need any partner. She is hunting surrounded by her nymphs. This idea of freedom and women's skill is expressed in many Greek myths. In Peloponnese the temples of Artemis were built near springs, rivers and marshes. Artemis was closely related to the waters and especially to Poseidon, the god of the waters. Her common epithets are Limnnaia, Limnatis (relation to waters) and Potamia and Alphaea (relation to rivers). In some cults she is the healer goddess of women with the surnames Lousia and Thermia. Artemis is the leader of the nymphs (Hegemone) and she is hunting surrounded by them. The nymphs appear during the festival of the marriage, and they are appealed by the pregnant women. Artemis became goddess of marriage and childbirth. She was worshipped with the surname Eucleia in several cities. Women consecrated clothes to Artemis for a happy childbirth and she had the epithets Lochia and Lecho. The Dorians interpreted Artemis mainly as goddess of vegetation who was worshipped in an orgiastic cult with lascivious dances, with the common epithets Orthia, Korythalia and Dereatis. The female dancers wore masks and were famous in antiquity. The goddess of vegetation was also related to the tree-cult with temples near the holy trees and the surnames Apanchomene, Caryatis and Cedreatis. According to Greek beliefs the image of a god or a goddess gave signs or tokens and had divine and magic powers. With these conceptions she was worshipped as Tauria (the Tauric, goddess), Aricina (Italy) and Anaitis (Lydia). In the bucolic (pastoral) songs the image of the goddess was discovered in bundles of leaves or dry sticks and she had the surnames Lygodesma and Phakelitis. In the European folklore, a wild hunter is chasing an elfish woman who falls in the water. In the Greek myths the hunter is chasing a female deer (doe) and both disappear into the waters. In relation to these myths Artemis was worshipped as Saronia and Stymphalia . The myth of a goddess who is chased and then falls in the sea is related to the cults of Aphaea and Diktynna. Artemis carrying torches was identified with Hecate and she had the surnames Phosphoros and Selasphoros . In Athens and Tegea, she was worshipped as Artemis Kalliste, "the most beautiful". Sometimes the goddess had the name of an Amazon like Lyceia (with a helmet of a wolf-skin) and Molpadia. The female warriors Amazons embody the idea of freedom and women's independence. In spite of her status as a virgin who avoided potential lovers, there are multiple references to Artemis' beauty and erotic aspect; in the Odyssey, Odysseus compares Nausicaa to Artemis in terms of appearance when trying to win her favor, Libanius, when praising the city of Antioch, wrote that Ptolemy was smitten by the beauty of (the statue of) Artemis; whereas her mother Leto often took pride in her daughter's beauty. She has several stories surrounding her where men such as Actaeon, Orion, and Alpheus tried to couple with her forcibly, only to be thwarted or killed. Ancient poets note Artemis' height and imposing stature, as she stands taller and more impressive than all the nymphs accompanying her. Epithets and functions Artemis is rooted to the less developed personality of the Mycenean goddess of nature. The goddess of nature was concerned with birth and vegetation and had certain chthonic aspects. The Mycenean goddess was related to the Minoan mistress of the animals, who can be traced later in local cults, however we don't know to what extent we can differentiate the Minoan from the Mycenean religion. Artemis carries with her certain functions and characteristics of a Minoan form whose history was lost in the myths. According to the beliefs of the first Greeks in Arcadia, Artemis is the first nymph, a divinity of free nature. She was a great goddess and her temples were built near springs marshes and rivers where the nymphs live, and they are appealed by the pregnant women. In Greek religion we must see less tractable elements which have nothing to do with the Olympians, but come from an old, less organized world–exorcisms, rituals to raise crops, gods and goddesses conceived not quite in human shape. Some cults of Artemis retained the pre-Greek features which were consecrated by immemorial practices and connected with daily tasks. Artemis shows sometimes the wild and darker side of her character and can bring immediate death with her arrows, however she embodies the idea of "the free nature" which was introduced by the first Greeks. The Dorians came later in the area, probably from Epirus and the goddess of nature was mostly interpreted as a vegetation goddess who was related to the ecstatic Minoan tree-cult. She was worshipped in orgiastic cults with lascivious and sometimes obscene dances, which have pure Greek elements introduced by the Dorians. The feminine (sometimes male) dancers wore usually masks, and they were famous in the antiquity. The great popularity of Artemis corresponds to the Greek belief in freedom and she is mainly the goddess of women in a patriarchal society. The goddess of free nature is an independent woman and doesn't need a partner. Artemis is frequently depicted carrying a torch and she was occasionally identified with Hecate. Like other Greek deities, she had a number of other names applied to her, reflecting the variety of roles, duties, and aspects ascribed to the goddess. Aeginaea, probably huntress of chamois or the wielder of the javelin, at Sparta However the word may mean "from the island Aegina", that relates Artemis with Aphaia (Britomartis). Aetole, of Aetolia at Nafpaktos. A marble statue represented the goddess in the attitude of one hurling a javelin. Agoraea, guardian of popular assemblies in Athens. She was considered to be the protector of the assemblies of the people in the agora. At Olympia the cult of "Artemis Agoraea" was related to the cult of Despoinai. (The double named goddesses Demeter and Persephone). Agrotera, the huntress of wild wood, in the Iliad and many cults. It was believed that she first hunted at Agrae of Athens after her arrival from Delos. There was a custom of making a "slaughter sacrifice", to the goddess before a battle. The deer always accompanies the goddess of hunting. Her epithet Agraea is similar with Agrotera. Alphaea, in the district of Elis. The goddess had an annual festival at Olympia and a temple at Letrinoi near the river Alpheus. At the festival of Letrinoi, the girls were dancing wearing masks. In the legend, Alphaea and her nymphs covered their faces with mud and the river god Alpheus, who was in love with her, could not distinguish her from the others. This explains, somehow, the clay masks at Sparta. Amarynthia, or Amarysia, with a famous temple at Amarynthus near Eretria. The goddess was related to the animals, however she was also a healer goddess of women. She is identified with Kolainis. Amphipyros, with fire at each end, a rare epithet of Artemis as bearing a torch in either hand. Sophocles calls her, "Elaphebolos, (deer slayer) Amphipyros", reminding the annual fire of the festival Laphria The adjective refers also to the twin fires of the two peaks of the Mount Parnassus above Delphi (Phaedriades). Anaitis, in Lydia. The fame of Tauria (the Tauric goddess) was very high, and the Lydians claimed that the image of the goddess was among them. It was considered that the image had divine powers. The Athenians believed that the image became booty to the Persians and was carried from Brauron to Susa. Angelos, messenger, envoy, title of Artemis at Syracuse in Sicily. Apanchomene, the strangled goddess, at Caphyae in Arcadia. She was a vegetation goddess related to the ecstatic tree cult. The Minoan tree goddesses Helene, Dentritis, and Ariadne were also hanged. This epithet is related to the old traditions where icons and puppets of a vegetation goddess would be hung on a tree. It was believed that the plane tree near the spring at Caphyae, was planted by Menelaus, the husband of Helen of Troy. The tree was called "Menelais". The previous name of the goddess was most likely Kondyleatis. Aphaea, or Apha, unseen or disappeared, a goddess at Aegina and a rare epithet of Artemis. Aphaea is identified with Britomartis. In the legend Britomartis (the sweet young woman) escaped from Minos, who fell in love with her. She travelled to Aegina on a wooden boat and then she disappeared. The myth indicates an identity in nature with Diktynna. Aricina, derived from the town Aricia in Latium, or from Aricia, the wife of the Roman forest god Virbius (Hippolytus). The goddess was related with Artemis Tauria (the Tauric Artemis). Her statue was considered the same with the statue that Orestes brought from Tauris. Near the sanctuary of the goddess there was a combat between slaves who had run away from their masters and the prize was the priesthood of Artemis. Ariste, the best, a goddess of the women. Pausanias describes xoana of "Ariste" and "Kalliste" in the way to the academy of Athens and he believes that the names are surnames of the goddess Artemis, who is depicted carrying a torch. Kalliste is not related to Kalliste of Arcadia. Aristobule, the best advisor, at Athens. The politician and general Themistocles built a temple of Artemis Aristobule near his house in the deme of Melite, in which he dedicated his own statue. Astrateias, she that stops an invasion, at Pyrrichos in Laconia. A wooden image (xoanon), was dedicated to the goddess, because she stopped the invasion of the Amazons in this area. Another xoanon represented "Apollo Amazonios". Basileie, at Thrace and Paeonia. The women offered wheat stalks to the goddess. In this cult, which reached Athens, Artemis is relative to the Thracian goddess Bendis. Brauronia, worshipped at Brauron in Attica. Her cult is remarkable for the "arkteia", young girls who dressed with short saffron-yellow chitons and imitated bears (she-bears: arktoi). In the Acropolis of Athens, the Athenian girls before puberty should serve the goddess as "arktoi". Artemis was the goddess of marriage and childbirth. The name of the small "bears" indicate the theriomorphic form of Artemis in an old pre-Greek cult. In the cult of Baubronia, the myth of the sacrifice of Iphigenia was represented in the ritual. Boulaia, of the council, in Athens. Boulephoros, counselling, advising, at Miletus, probably a Greek form of the mother-goddess. Caryatis, the lady of the nut-tree, at Caryae on the borders between Laconia and Arcadia. Artemis was strongly related to the nymphs, and young girls were dancing the dance Caryatis. The dancers of Caryai were famous in antiquity. In a legend, Carya, the female lover of Dionysos was transformed into a nut tree and the dancers into nuts. The city is considered to be the place of the origin of the bucolic (pastoral) songs. Cedreatis, near Orchomenus in Arcadia. A xoanon was mounted on the holy cedar (kedros). Chesias, from the name of a river at Samos. Chitonia, wearing a loose tunic, at Syracuse in Sicily, as goddess of hunting. The festival was distinguished by a peculiar dance and by a music on the flute. Chrisilakatos, of the golden arrow, in Homer's Iliad as a powerful goddess of hunting. In the Odyssey, she descends from a peak and travels along the ridges of Mount Erymanthos, that was sacred to the "Mistress of the animals". In a legend, when the old goddess became wrathful, she would send the terrible Erymanthian boar to lay waste to fields. Artemis can bring an immediate death with her arrows. In the Iliad, Hera stresses the wild and darker side of her character and she accuses her of being "a lioness between women". Chrisinios, of the golden reins, as a goddess of hunting in her chariot. In the Iliad, in her wrath, she kills the daughter of Bellerophon. Coryphaea, of the peak, at Epidaurus in Argolis. On the top of the mountain Coryphum there was a sanctuary of the goddess. The famous lyric poet Telesilla mentions "Artemis Coryphaea" in an ode. Cnagia, near Sparta in Laconia. In a legend the native Cnageus was sold as a slave in Crete. He escaped to his country taking with him the virgin priestess of the goddess Artemis. The priestess carried with her from Crete the statue of the goddess, who was named Cnagia. Cynthia, as goddess of the moon, from her birthplace on Mount Cynthos at Delos. Selene, the Greek personification of the moon, and the Roman Diana were also sometimes called Cynthia. Daphnaea, as goddess of vegetation. Her name is most likely derived from the "laurel-branch" which was used as "May-branch", or an allusion to her statue being made of laurel-wood (daphne) Strabo refers to her annual festival at Olympia. Delia, the feminine form of Apollo Delios Delphinia, the feminine form of Apollo Delphinios (literally derived from Delphi). Dereatis, at Sparta near Taygetos. Dancers were performing the obscene dance "kallabis". Diktynna, from Mount Dikti, who is identified with the Minoan goddess Britomartis. Her name is derived from the mountain Dikti in Crete. A folk etymology derives her name from the word "diktyon" (net). In the legend Britomartis (the sweet young woman) was hunting together with Artemis who loved her desperately. She escaped from Minos, who fell in love with her, by jumping into the sea and falling into a net of fishes. Eileithyia, goddess of childbirth in Boeotia and other local cults especially in Crete and Laconia. During the Bronze Age, in the cave of Amnisos, she was related to the annual birth of the divine child. In the Minoan myth the child was abandoned by his mother and then he was nurtured by the powers of nature. Elaphia, goddess of hunting (deer). Strabo refers to her annual festival at Olympia. Elaphebolos, shooter of deer, with the festival "Elaphebolia" at Phocis and Athens, and the name of a month in several local cults. Sophocles calls Artemis "Elaphebolos, Amphipyros", carrying a torch in each hand. This was used during the annual fire of the festival of Laphria at Delphi. Ephesia, at the city Ephesus of Minor Asia. The city was a great center of the cult of the goddess, with a magnificent temple, (Artemision). Ephesia belongs to the series of the Anatolian goddesses (Great mother, or mountain-mother). However she is not a mother-goddess, but the goddess of free nature. In the Homeric Ionic sphere she is the goddess of hunting. Eucleia, as a goddess of marriage in Boeotia, Locris and other cities. Epheboi and girls who wanted to marry should make a preliminary sacrifice in honour of the goddess. "Eukleios" was the name of a month in several cities and "Eucleia" was the name of a festival at Delphi. In Athens Peitho, Harmonia and Eucleia can create a good marriage. The bride would sacrifice to the virgin goddess Artemis. Eupraxis, fine acting. On a relief from Sicily the goddess is depicted holding a torch in one hand and an offering on the other. The torch was used for the ignition of the fire on the altar. Eurynome, wide ruling, at Phigalia in Arcadia. Her wooden image (xoanon) was bound with a roller golden chain. The xoanon depicted a woman's upper body and the lower body of a fish. Pausanias identifies her as one of the Oceanids daughters of Oceanus and Tethys Hagemo, or Hegemone, leader, as the leader of the nymphs. Artemis was playing and dancing with the nymphs who lived near springs, waters and forests and she was hunting surrounded by them. The nymphs joined the festival of the marriage and then they returned to their original form. The pregnant women appealed to the nymphs for help. In Greek popular culture the commandress of the Neraiden (fairies) is called "Great lady", "Lady Kalo" or "Queen of the mountains". Heleia, related to the marsh or meadow in Arcadia, Messenia and Kos. Hemeresia, the soothing goddess worshipped at well Lusoi Heurippa, horse finder, at Pheneus in Arcadia. Her sanctuary was near the bronze statue of Poseidon Hippios (horse). In a legend, Odysseus lost his mares and travelled throughout Greece to find them. He found his mares at Pheneus, where he founded the temple of "Artemis Heurippa". Hymnia, at Orchomenos in Boeotia. She was a goddess of dance and songs, especially of female choruses. The priestesses of Artemis Hymnia couldn't have a normal life like the other women. They were at first virgins and were to remain celibate in the priesthood. They could not use the same baths and they were not allowed to enter the house of a private man. Iakinthotrophos, nurse of Hyacinthos at Knidos. Hyacinthos was a god of vegetation with Minoan origin. After his birth he was abandoned by his mother and then he was nurtured by Artemis who represents the first power of nature. Imbrasia, from the name of a river at Samos. Iocheaira, shooter of arrows by Homer (archer queen), as goddess of hunting. She has a wild character and Hera advises her to kill animals in the forest, instead of fighting with her superiors. Apollo and Artemis kill with their arrows the children of Niobe because she offended her mother Leto. In the European and Greek popular religion the arrow-shots from invisible beings can bring diseases and death. Issora, or Isora, at Sparta, with the surname Limnaia or Pitanitis. Issorium was a part of a great summit which advances into the level of Eurotas a Pausanias identifies her with the Minoan Britomartis. Kalliste, the most beautiful, another form of Artemis with the shape of a bear at Tricoloni near Megalopolis a mountainous area full of wild beasts. Kallisto the attendant of Artemis, bore Arcas the patriarch of the Arcaden. In a legend Kallisto was transformed into a bear and in another myth Artemis shot her. Kallisto is a hypostasis of Artemis with a theriomorphic form from a pre-Greek cult. 'Keladeini, echoing chasing (noisy) in Homer's Iliad because she hunts wild boars and deer surrounded by her nymphs. 'Kithone, as a goddess of childbirth at Millet. Her name is probably derived from the custom of clothes consecration to the goddess, for a happy childbirth. Kolainis, related with the animals at Euboea and Attica. At Eretria she had a major temple and she was called Amarysia. The goddess became a healer goddess of women. Kolias, in a cult of women. Men were excluded because the fertility of the earth was related to motherhood. Aristophanes mentions Kolias and Genetyllis who are accused for lack of restraint. Their cult had a very emotional character. Kondyleatis, named after the village Kondylea, where she had a grove and a temple. In a legend some boys tied a rope around the image of the goddess and said that Artemis was hanged. The boys were killed by the inhabitants and this caused a divine punishment. All the women brought dead children in the world, until the boys were honourably buried. An annual sacrifice was instituted to the divine spirits of the boys. Kondyleatis was most likely the original name of Artemis Apanchomeni. Kordaka, in Elis. Τhe dancers performed the obscene dance kordaka, which is considered the origin of the dance of the old comedy. The dance is famous for its nudge and hilarity and gave the name to the goddess. Korythalia, derived from Korythale, probably the "laurel May-branch", as a goddess of vegetation at Sparta. The epheboi and the girls who entered the marriage age placed the Korythale in front of the door of the house. In the cult the female dancers (famous in the antiquity) performed boisterous dances and were called Korythalistriai. In Italy, the male dancers wore wooden masks and they were called kyrritoi (pushing with the horns). Kourotrophos, protector of young boys. During the Apaturia the front hair of young girls and young boys (koureion) were offered to the goddess. Laphria, the mistress of the animals (Pre-Greek name) in many cults, especially in central Greece, Phocis and Patras. "Laphria" was the name of the festival. The characteristic rite was the annual fire and there was a custom to throw animals alive in the flames during the fest. The cult of "Laphria" at Patras was transferred from the city Calydon of Aetolia In a legend during the Calydonian boar hunt the fierce-huntress Atalanta was the first who wounded the boar. Atalanta was a Greek heroine, symbolizing the free nature and independence Lecho, protector of a woman in childbed, or of one who has just given birth. Leukophryene, derived from the city Leucophrys in Magnesia of Ionia. The original form of the cult of the goddess is unknown, however it seems that once the character of the goddess was similar with her character in Peloponnese. Limnaia, of the marsh, at Sparta, with a swimming place Limnaion. (λίμνη: lake). Limnatis, of the marsh and the lake, at Patras, Ancient Messene and many local cults. During the festival, the Messenian young ladies were violated. Cymbals have been found around the temple, indicating that the festival was celebrated with dances. Lochia, as goddess of childbirth and midwifery. Women consecrated clothes to the goddess for a happy childbirth. Other less common epithets of Artemis as goddess of childbirth are Eulochia and Geneteira. Lousia, bather or purifier, as a healer goddess at Lusoi in Arcadia, where Melampus healed the Proitiden. Lyaia, at Syracuse in Sicily. (Spartan colony). There is a clear influence from the cult of Artemis Caryatis in Laconia. The Sicilian songs were transformed songs from the Laconic bucolic (pastoral) songs at Caryai. Lyceia, of the wolf or with a helmet of a wolf skin, at Troezen in Argolis. It was believed that her temple was built by the hunter Hippolytus who abstained from sex and marriage. Lyceia was probably a surname of Artemis among the Amazons from whom Hippolytus descended from his mother. (Hippolyta). Lycoatis, with a bronze statue at the city Lycoa in Arcadia. The city was near the foot of the mountain Mainalo, which was sacred to, Pan. On the south slope the Mantineians fetched the bones of Arcas, the son of Kallisto.(Kalliste). Lygodesma, willow bound, at Sparta (another name of Orthia). In a legend her image was discovered in a thicket of willows. standing upright (orthia). Melissa, bee or beauty of nature, as a moon goddess. In Neoplatonic philosophy melissa is any pure being of souls coming to birth. The goddess took suffering away from mothers giving birth. It was Melissa who drew souls coming to birth. Molpadia, singer of divine songs, a rare epithet of Artemis as a goddess of dances and songs and leader of the nymphs. In a legend Molpadia was an Amazon. During the Attic war she killed Antiope to save her by the Athenian king Theseus, but she was killed by Theseus. Munichia, in a cult at Piraeus, related to the arkteia of Brauronian Artemis. According to legend, if someone killed a bear, he should be punished by sacrificing his daughter in the sanctuary. Embaros disguised his daughter by dressing her like a bear (arktos), and hid her in the adyton. He placed a goat on the altar and he sacrificed the goat instead of his daughter. Mysia, with a temple on the road from Sparta to Arcadia near the "Tomb of the Horse". Oenoatis, derived from the city Oenoe in Argolis. Above the town there was the mountain Artemisium, with the temple of the goddess on the summit. In a Greek legend the mountain was the place where Heracles chased and captured the terrible Ceryneian Hind, an enormous female deer with golden antlers and hooves of bronze. The deer was sacred to Artemis. Orthia, upright, with a famous festival at Sparta. Her cult was introduced by the Dorians. She was worshipped as a goddess of vegetation in an orgiastic cult with boisterous cyclic dances. Among the offerings, there were teracotta masks representing grotesque faces and it seems that animal-masks were also used. In literature there was a great fight for taking the pieces of cheese that were offered to the goddess. The whipping of the epheboi near the altar was a ritual of initiation, preparing them for their future life as soldiers. During this ritual the altar was full of blood. Paidotrophos, protector of children at Corone in Messenia. During a festival of Korythalia the wet-nurses brought the infants in the sanctuary of the goddess, to get her protection. Peitho, Persuasion, at the city Argos in Argolis. Her sanctuary was in the market place. In Pelopponnese Peitho is related to Artemis. In Athens Peitho is the consensual force in civilized society and emphasizes civic armony. Pergaia, who was worshipped at Pamphylia of Ionia. A famous annual festival was celebrated in honor of Artemis in the city Perga. Filial cults existed in Pisidia, north of Pamphylia. Pheraia, from the city Pherai, at Argos, Athens and Sicyon. It was believed that the image of the goddess was brought from the city Pherai of Thessaly. This conception relates Artemis with the distinctly Thessalian goddess Enodia. Enodia had similar functions with Hecate and she carried the common epithet "Pheraia". Phakelitis, of the bundle, at Tyndaris in Sicily. In the local legend the image of the goddess was found in a bundle of dry sticks. Phoebe, bright, as a moon goddess sister of Phoebus. The epithet Phoebe is also given to the moon goddess Selene. Phosphoros, carrier of light. In Ancient Messene she is carrying a torch as a moon-goddess and she is identified with Hecate. Polo, in Thasos, with inscriptions and statues from the Hellenistic and Roman period. The name is probably related to "parthenos" (virgin). Potamia, of the river, at Ortygia in Sicily. In a legend Arethusa, was a chaste nymph and tried to escape from the river god Alpheus who fell in love with her. She was transformed by Artemis into a stream, traversed underground and appeared at Ortygia, thus providing water for the city. Ovid calls Arethusa, "Alfeias" (Alfaea) (of the river god). Potnia Theron, mistress of the animals. The origin of her cult is Pre-Greek and the term is used by Homer for the goddess of hunting. Potnia was the name of the Mycenean goddess of nature. In the earliest Minoan conceptions the "Master of the animals" is depicted between lions and daimons (Minoan Genius). Sometimes "potnia theron" is depicted with the head of a Gorgon, who is her distant ancestor. She is the only Greek goddess who stands close to the daimons and she has a wild side which differentiates her from other Greek gods. In the Greek legends when the goddess was offended she would send terrible animals like the Erymanthian boar and Calydonian boar to laid waste the farmer's land, or voracious birds like the Stymphalian birds to attack farms and humans. In Arcadia and during the festival of Laphria, there is evidence of barbaric animal sacrifices. Pythia, as a goddess worshipped at Delphi. Saronia, of Saron, at Troezen across the Saronic gulf. In a legend the king Saron was chasing a doe that dashed into the sea. He followed the doe in the waters and he was drowned in the waves of the sea. He gave his name to the Saronic gulf. Selasphoros, carrier of light, flame, as a moon-goddess identified with Hecate, in the cult of Munichia at Piraeus. Soteira (Kore Soteira), Kore saviour, at Phigalia. In Arcadia the mistress of the animals is the first nymph closely related to the springs and the animals, in a surrounding of animal-headed daimons. At Lycosura Artemis is depicted holding a snake and a torch and dressed with a deer skin, besides Demeter and Persephone. It was said that she was not the daughter of Leto, but the daughter of Demeter. Stymphalia, of Stymphalus, a city in Arcadia. In a legend the water of the river descended in a chasm which was clogged up and the water overflowed creating a big marsh on the plain. A hunter was chasing a deer and both fell into the mud at the bottom of the chasm. The next day the whole water of the marsh dried up and the land was cultivated. The monstrous man eating Stymphalian birds that were killed by Heracles were considered birds of Artemis. Tauria, or Tauro (the Tauric goddess), from the Tauri or of the bull. Euripides mentions the image of "Artemis Tauria". It was believed that the image of the goddess had divine powers. Her image was considered to have been carried from Tauris by Orestes and Iphigenia and was brought to Brauron, Sparta or Aricia. Tauropolos, usually interpreted as hunting bull goddess. Tauropolos was not original in Greece and she has similar functions with foreign goddesses, especially with the mythical bull-goddess. The cult can be identified at Halae Araphenides in Attica. At the end of the peculiar festival, a man was sacrificed. He was killed in the ritual with a sword cutting his throat. Strabo mentions that during the night-fest of Tauropolia a girl was raped. Thermia, as a healer goddess at Lousoi in Arcadia, where Melampus healed the Proitiden. Toxia, or Toxitis, bowstring in torsion, as goddess of hunting in the island of Kos and at Gortyn. She is the sister of "Apollo Toxias". Triclaria, at Patras. Her cult was superimposed on the cult of Dionysos Aisemnetis. During the festival of the god the children were wearing garlands of corn-ears. In a ritual they laid them aside to the goddess Artemis. Triclaria was a priestess of Artemis who made love with her lover in the sanctuary. They were punished to be sacrificed in the temple and each year the people should sacrifice a couple to the goddess. Europylus came carrying a chest with the image of Dionysos who put an end to the killings. Mythology Birth Various conflicting accounts are given in Greek mythology regarding the birth of Artemis and Apollo, her twin brother. In terms of parentage, though, all accounts agree that she was the daughter of Zeus and Leto and that she was the twin sister of Apollo. In some sources, she is born at the same time as Apollo; but in others, earlier or later. Although traditionally stated to be twins, the author of The Homeric Hymn 3 to Apollo (the oldest extant account of Leto's wandering and birth of her children) is only concerned with the birth of Apollo, and sidelines Artemis; in fact in the Homeric Hymn they are not stated to be twins at all. It is a slightly later poet, Pindar, who speaks of a single pregnancy. The two earliest poets, Homer and Hesiod, confirm Artemis and Apollo's status as full siblings born to the same mother and father, but neither explicitly makes them twins. According to Callimachus, Hera, who was angry with her husband Zeus for impregnating Leto, forbade her from giving birth on either terra firma (the mainland) or on an island, but the island of Delos disobeyed and allowed Leto to give birth there. According to some, this rooted the once freely floating island to one place. According to the Homeric Hymn to Artemis, however, the island where she and her twin were born was Ortygia. In ancient Cretan history, Leto was worshipped at Phaistos, and in Cretan mythology, Leto gave birth to Apollo and Artemis on the islands known today as Paximadia. A scholium of Servius on Aeneid iii. 72 accounts for the island's archaic name Ortygia by asserting that Zeus transformed Leto into a quail (ortux) to prevent Hera from finding out about his infidelity, and Kenneth McLeish suggested further that in quail form, Leto would have given birth with as few birth-pains as a mother quail suffers when she lays an egg. The myths also differ as to whether Artemis was born first, or Apollo. Most stories depict Artemis as firstborn, becoming her mother's midwife upon the birth of her brother Apollo. Servius, a late fourth/early fifth-century grammarian, wrote that Artemis was born first because at first it was night, whose instrument is the Moon, which Artemis represents, and then day, whose instrument is the Sun, which Apollo represents. Pindar however writes that both twins shone like the Sun when they came into the bright light. After their troubling childbirth, Leto took the twin infants and crossed over to Lycia, in the southwest corner of Asia Minor, where she tried to drink from and bathe the babies in a spring she found there. However, the local Lycian peasants tried to prevent the twins and their mother from making use of the water by stirring up the muddy bottom of the spring, so the three of them could not drink it. Leto, in her anger that the impious Lycians had refused to offer hospitality to a fatigued mother and her thirsty infants, transformed them all into frogs, forever doomed to swim and hop around the spring. Childhood The childhood of Artemis is not fully related to any surviving myth. A poem by Callimachus to the goddess "who amuses herself on mountains with archery" imagines a few vignettes of a young Artemis. While sitting on the knee of her father, she asks him to grant her 10 wishes: to forever remain a virgin to have many names to set her apart from her brother Phoebus (Apollo) to have a bow and arrow made by the Cyclopes to be the Phaesporia or Light Bringer to have a short, knee-length tunic so she could hunt to have 60 "daughters of Okeanos", all nine years of age, to be her choir to have 20 Amnisides nymphs as handmaidens so they would watch over her hunting dogs and bow while she rested to rule over all the mountains to be assigned any city, and only to visit when called by birthing mothers to have the ability to help women in the pains of childbirth. Artemis believed she had been chosen by the Fates to be a midwife, particularly as she had assisted her mother in the delivery of her twin brother Apollo. All of her companions remained virgins, and Artemis closely guarded her own chastity. Her symbols included the golden bow and arrow, the hunting dog, the stag, and the moon. Callimachus then tells how Artemis spent her girlhood seeking out the things she would need to be a huntress, and how she obtained her bow and arrows from the isle of Lipara, where Hephaestus and the Cyclopes worked. While Oceanus' daughters were initially fearful, the young Artemis bravely approached and asked for a bow and arrows. He goes on to describe how she visited Pan, god of the forest, who gave her seven female and six male hounds. She then captured six golden-horned deer to pull her chariot. Artemis practiced archery first by shooting at trees and then at wild game. Relations with men The river god Alpheus was in love with Artemis, but as he realized he could do nothing to win her heart, he decided to capture her. When Artemis and her companions at Letrenoi go to Alpheus, she becomes suspicious of his motives and covers her face with mud so he does not recognize her. In another story, Alphaeus tries to rape Artemis' attendant Arethusa. Artemis pities the girl and saves her, transforming her into a spring in the temple Artemis Alphaea in Letrini, where the goddess and her attendant drink. Bouphagos, son of the Titan Iapetus, sees Artemis and thinks about raping her. Reading his sinful thoughts, Artemis strikes him down at Mount Pholoe. Daphnis was a young boy, a son of Hermes, who was accepted by and became a follower of the goddess Artemis; Daphnis would often accompany her in hunting and entertain her with his singing of pastoral songs and playing of the panpipes. Artemis taught a man, Scamandrius, how to be a great archer, and he excelled in the use of a bow and arrow with her guidance. Broteas was a famous hunter who refused to honour Artemis, and boasted that nothing could harm him, not even fire. Artemis then drove him mad, causing him to walk into fire, ending his life. According to Antoninus Liberalis, Siproites was a Cretan who was metamorphized into a woman by Artemis, for, while hunting, seeing the goddess bathing. Artemis changed a Calydonian man named Calydon, son of Ares and Astynome, into stone when he saw the goddess bathing naked. Divine retribution Actaeon Multiple versions of the Actaeon myth survive, though many are fragmentary. The details vary but at the core, they involve the great hunter Actaeon whom Artemis turns into a stag for a transgression, and who is then killed by hunting dogs. Usually, the dogs are his own, but no longer recognize their master. Occasionally they are said to be the hounds of Artemis. Various tellings diverge in terms of the hunter's transgression: sometimes merely seeing the virgin goddess naked, sometimes boasting he is a better hunter than she, or even merely being a rival of Zeus for the affections of Semele. Apollodorus, who records the Semele version, notes that the ones with Artemis are more common. According to Lamar Ronald Lacey's The Myth of Aktaion: Literary and Iconographic Studies, the standard modern text on the work, the most likely original version of the myth portrays Actaeon as the hunting companion of the goddess who, seeing her naked in her sacred spring, attempts to force himself on her. For this hubris, he is turned into a stag and devoured by his own hounds. However, in some surviving versions, Actaeon is a stranger who happens upon Artemis. A single line from Aeschylus's now lost play Toxotides ("female archers") is among the earlier attestations of Actaeon's myth, stating that "the dogs destroyed their master utterly", with no confirmation of Actaeon's metamorphosis or the god he offended (but it is heavily implied to be Artemis, due to the title). Ancient artwork depicting the myth of Actaeon predate Aeschylus. Euripides, coming in a bit later, wrote in the Bacchae that Actaeon was torn to shreds and perhaps devoured by his "flesh-eating" hunting dogs when he claimed to be a better hunter than Artemis. Like Aeschylus, he does not mention Actaeon being deer-shaped when that happens. Callimachus writes that Actaeon chanced upon Artemis bathing in the woods, and she caused him to be devoured by his own hounds for the sacrilege, and he makes no mention of transformation into a deer either. Diodorus Siculus wrote that Actaeon dedicated his prizes in hunting to Artemis, proposed marriage to her, and even tried to forcefully consummate said "marriage" inside the very sacred temple of the goddess; for this he was given the form "of one of the animals which he was wont to hunt", and then torn to shreds by his hunting dogs. Diodorus also mentioned the alternative of Actaeon claiming to be a better hunter than the goddess of the hunt. Hyginus also mentions Actaeon attempting to rape Artemis when he finds her bathing naked, and her transforming him into the doomed deer. Apollodorus wrote that when Actaeon saw Artemis bathing, she turned him into a deer on the spot, and intentionally drove his dogs into a frenzy so that they would kill and devour him. Afterward, Chiron built a sculpture of Actaeon to comfort his dogs in their grief, as they could not find their master no matter how much they looked for him. According to the Latin version of the story told by the Roman Ovid, Actaeon was a hunter who after returning home from a long day's hunting in the woods, he stumbled upon Artemis and her retinue of nymphs bathing in her sacred grotto. The nymphs, panicking, rushed to cover Artemis' naked body with their own, as Artemis splashed some water on Actaeon, saying he was welcome to share with everyone the tale of seeing her without any clothes as long as he could share it at all. Immediately, he was transformed into a deer, and in panic ran away. But he did not go far, as he was hunted down and eventually caught and devoured by his own fifty hunting dogs, who could not recognize their own master. Pausanias says that Actaeon saw Artemis naked and that she threw a deerskin on him so that his hounds would kill him, in order to prevent him from marrying Semele. Niobe The story of Niobe, queen of Thebes and wife of Amphion, who blasphemously boasted of being superior to Leto. This myth is very old; Homer knew of it and wrote that Niobe had given birth to twelve children, equally divided in six sons and six daughters (the Niobids). Other sources speak of fourteen children, seven sons, and seven daughters. Niobe claimed of being a better mother than Leto, for having more children than Leto's own two, "but the two, though they were only two, destroyed all those others." Leto was not slow to catch up on that and grew angry at the queen's hubris. She summoned her children and commanded them to avenge the slight against her. Swiftly Apollo and Artemis descended on Thebes. While the sons were hunting in the woods, Apollo crept up on them and slew all seven with his silver bow. The dead bodies were brought to the palace. Niobe wept for them, but did not relent, saying that even now she was better than Leto, for she still had seven children, her daughters. On cue, Artemis then started shooting the daughters one by one. Right as Niobe begged for her youngest one to be spared, Artemis killed that last one. Niobe cried bitter tears, and was turned into a rock. Amphion, at the sight of his dead sons, killed himself. The gods themselves entombed them. In some versions, Apollo and Artemis spared a single son and daughter each, for they prayed to Leto for help; thus Niobe had as many children as Leto did, but no more. Orion Orion was Artemis' hunting companion; after giving up on trying to find Oenopion, Orion met Artemis and her mother Leto, and joined the goddess in hunting. A great hunter himself, he bragged that he would kill every beast on earth. Gaia, the earth, was not too pleased to hear that, and sent a giant scorpion to sting him. Artemis then transferred him into the stars as the constellation Orion. In one version Orion died after pushing Leto out of the scorpion's way. In another version, Orion tries to violate Opis, one of Artemis' followers from Hyperborea, and Artemis kills him. In a version by Aratus, Orion grabs Artemis' robe and she kills him in self-defense. Other writers have Artemis kill him for trying to rape her or one of her attendants. Istrus wrote a version in which Artemis fell in love with Orion, apparently the only time Artemis ever fell in love. She meant to marry him, and no talk from her brother Apollo would change her mind. Apollo then decided to trick Artemis, and while Orion was off swimming in the sea, he pointed at him (barely a spot in the horizon) and wagered that Artemis could not hit that small "dot". Artemis, ever eager to prove she was the better archer, shot Orion, killing him. She then placed him among the stars. In Homer's Iliad, the goddess of the dawn Eos seduces Orion, angering the gods who did not approve of immortal goddesses taking mortal men for lovers, causing Artemis to shoot and kill him on the island of Ortygia. Callisto Callisto, the daughter of Lycaon, King of Arcadia, was one of Artemis' hunting attendants, and, as a companion of Artemis, took a vow of chastity. According to Hesiod in his lost poem Astronomia, Zeus appeared to Callisto, and seduced her, resulting in her becoming pregnant. Though she was able to hide her pregnancy for a time, she was soon found out while bathing. Enraged, Artemis transformed Callisto into a bear, and in this form she gave birth to her son Arcas. Both of them were then captured by shepherds and given to Lycaon, and Callisto thus lost her child. Sometime later, Callisto "thought fit to go into" a forbidden sanctuary of Zeus, and was hunted by the Arcadians, her son among them. When she was about to be killed, Zeus saved her by placing her in the heavens as a constellation of a bear. In his De Astronomica, Hyginus, after recounting the version from Hesiod, presents several other alternative versions. The first, which he attributes to Amphis, says that Zeus seduced Callisto by disguising himself as Artemis during a hunting session, and that when Artemis found out that Callisto was pregnant, she replied saying that it was the goddess's fault, causing Artemis to transform her into a bear. This version also has both Callisto and Arcas placed in the heavens, as the constellations Ursa Major and Ursa Minor. Hyginus then presents another version in which, after Zeus lay with Callisto, it was Hera who transformed her into a bear. Artemis later, while hunting, kills the bear, and "later, on being recognized, Callisto was placed among the stars". Hyginus also gives another version, in which Hera tries to catch Zeus and Callisto in the act, causing Zeus to transform her into a bear. Hera, finding the bear, points it out to Artemis, who is hunting; Zeus, in panic, places Callisto in the heavens as a constellation. Ovid gives a somewhat different version: Zeus seduced Callisto once again disguised as Artemis, but she seems to realise that it is not the real Artemis, and she thus does not blame Artemis when, during bathing, she is found out. Callisto is, rather than being transformed, simply ousted from the company of the huntresses, and she thus gives birth to Arcas as a human. Only later is she transformed into a bear, this time by Hera. When Arcas, fully grown, is out hunting, he nearly kills his mother, who is saved only by Zeus placing her in the heavens. In the Bibliotheca, a version is presented in which Zeus raped Callisto, "having assumed the likeness, as some say, of Artemis, or, as others say, of Apollo". He then turned her into a bear himself so as to hide the event from Hera. Artemis then shot the bear, either upon the persuasion of Hera, or out of anger at Callisto for breaking her virginity. Once Callisto was dead, Zeus made her into a constellation, took the child, named him Arcas, and gave him to Maia, who raised him. Pausanias, in his Description of Greece, presents another version, in which, after Zeus seduced Callisto, Hera turned her into a bear, which Artemis killed to please Hera. Hermes was then sent by Zeus to take Arcas, and Zeus himself placed Callisto in the heavens. Minor myths When Zeus' gigantic son Tityos tried to rape Leto, she called out to her children for help, and both Artemis and Apollo were quick to respond by raining down their arrows on Tityos, killing him. Chione was a princess of Phokis. She was beloved by two gods, Hermes and Apollo, and boasted that she was more beautiful than Artemis because she had made two gods fall in love with her at once. Artemis was furious and killed Chione with an arrow, or struck her mute by shooting off her tongue. However, some versions of this myth say Apollo and Hermes protected her from Artemis' wrath. Artemis saved the infant Atalanta from dying of exposure after her father abandoned her. She sent a female bear to nurse the baby, who was then raised by hunters. In some stories, Artemis later sent a bear to injure Atalanta because others claimed Atalanta was a superior hunter. Among other adventures, Atalanta participated in the Calydonian boar hunt, which Artemis had sent to destroy Calydon because King Oeneus had forgotten her at the harvest sacrifices. In the hunt, Atalanta drew the first blood and was awarded the prize of the boar's hide. She hung it in a sacred grove at Tegea as a dedication to Artemis. Meleager was a hero of Aetolia. King Oeneus ordered him to gather heroes from all over Greece to hunt the Calydonian boar. After the death of Meleager, Artemis turns his grieving sisters, the Meleagrids, into guineafowl that Artemis favoured. In Nonnus' Dionysiaca, Aura, the daughter of Lelantos and Periboia, was a companion of Artemis. When out hunting one day with Artemis, she asserts that the goddess's voluptuous body and breasts are too womanly and sensual, and doubts her virginity, arguing that her own lithe body and man-like breasts are better than Artemis' and a true symbol of her own chastity. In anger, Artemis asks Nemesis for help to avenge her dignity. Nemesis agrees, telling Artemis that Aura's punishment will be to lose her virginity, since she dared question that of Artemis. Nemesis then arranges for Eros to make Dionysus fall in love with Aura. Dionysus intoxicates Aura and rapes her as she lies unconscious, after which she becomes a deranged killer. While pregnant, she tries to kill herself or cut open her belly, as Artemis mocks her over it. When she bore twin sons, she ate one, while the other, Iacchus, was saved by Artemis. The twin sons of Poseidon and Iphimedeia, Otos and Ephialtes, grew enormously at a young age. They were aggressive and skilled hunters who could not be killed except by each other. The growth of the Aloadae never stopped, and they boasted that as soon as they could reach heaven, they would kidnap Artemis and Hera and take them as wives. The gods were afraid of them, except for Artemis who captured a fine deer that jumped out between them. In another version of the story, she changed herself into a doe and jumped between them. The Aloadae threw their spears and so mistakenly killed one another. In another version, Apollo sent the deer into the Aloadae's midst, causing their accidental killing of each other. In another version, they start pilling up mountains to reach Mount Olympus in order to catch Hera and Artemis, but the gods spot them and attack. When the twins had retreated the gods learnt that Ares has been captured. The Aloadae, not sure about what to do with Ares, lock him up in a pot. Artemis then turns into a deer and causes them to kill each other. In some versions of the story of Adonis, Artemis sent a wild boar to kill him as punishment for boasting that he was a better hunter than her. In other versions, Artemis killed Adonis for revenge. In later myths, Adonis is a favorite of Aphrodite, who was responsible for the death of Hippolytus, who had been a hunter of Artemis. Therefore, Artemis killed Adonis to avenge Hippolytus's death. In yet another version, Adonis was not killed by Artemis, but by Ares as punishment for being with Aphrodite. Polyphonte was a young woman who fled home in pursuit of a free, virginal life with Artemis, as opposed to the conventional life of marriage and children favoured by Aphrodite. As a punishment, Aphrodite cursed her, causing her to mate and have children with a bear. Artemis, seeing that, was disgusted and sent a horde of wild animals against her, causing Polyphonte to flee to her father's house. Her resulting offspring, Agrius and Oreius, were wild cannibals who incurred the hatred of Zeus. Ultimately the entire family was transformed into birds who became ill portents for mankind. Coronis was a princess from Thessaly who became the lover of Apollo and fell pregnant. While Apollo was away, Coronis began an affair with a mortal man named Ischys. When Apollo learnt of this, he sent Artemis to kill the pregnant Coronis, or Artemis had the initiative to kill Coronis on her own accord for the insult done against her brother. The unborn child, Asclepius, was later removed from his dead mother's womb. When two of her hunting companions who had sworn to remain chaste and be devoted to her, Rhodopis and Euthynicus, fell in love with each other and broke their vows in a cavern, Artemis turned Rhodopis into a fountain inside that very cavern as punishment. The two had fallen in love not on their own but only after Eros had struck them with his love arrows, commanded by his mother Aphrodite, who had taken offence in that Rhodopis and Euthynicus rejected love and marriage in favour of a chaste life. When the queen of Kos Echemeia ceased to worship Artemis, she shot her with an arrow; Persephone then snatched the still-living Euthemia and brought her to the Underworld. Trojan War Artemis may have been represented as a supporter of Troy because her brother Apollo was the patron god of the city, and she herself was widely worshipped in western Anatolia in historical times. Artemis plays a significant role in the war; like Leto and Apollo, Artemis took the side of the Trojans. In Iliad Artemis on her chariot with the golden reigns, kills the daughter of Bellerophon. Bellorophone was a divine Greek hero who killed the monster Chimera. At the beginning of the Greek's journey to Troy, Artemis punished Agamemnon after he killed a sacred stag in a sacred grove and boasted that he was a better hunter than the goddess. When the Greek fleet was preparing at Aulis to depart for Troy to commence the Trojan War, Artemis becalmed the winds. The seer Calchas erroneously advised Agamemnon that the only way to appease Artemis was to sacrifice his daughter Iphigenia. In some version of the myth, Artemis then snatched Iphigenia from the altar and substituted a deer; in others, Artemis allowed Iphigenia to be sacrificed. In versions where Iphigenia survived, a number of different myths have been told about what happened after Artemis took her; either she was brought to Tauris and led the priests there, or she became Artemis' immortal companion.Aeneas was also helped by Artemis, Leto, and Apollo. Apollo found him wounded by Diomedes and lifted him to heaven. There, the three deities secretly healed him in a great chamber. During the theomachy, Artemis found herself standing opposite of Hera, on which a scholium to the Iliad wrote that they represent the Moon versus the air around the Earth. Artemis chided her brother Apollo for not fighting Poseidon and told him never to brag again; Apollo did not answer her. An angry Hera berated Artemis for daring to fight her: How now art thou fain, thou bold and shameless thing, to stand forth against me? No easy foe I tell thee, am I, that thou shouldst vie with me in might, albeit thou bearest the bow, since it was against women that Zeus made thee a lion, and granted thee to slay whomsoever of them thou wilt. In good sooth it is better on the mountains to be slaying beasts and wild deer than to fight amain with those mightier than thou. Howbeit if thou wilt, learn thou of war, that thou mayest know full well how much mightier am I, seeing thou matchest thy strength with mine. Hera then grabbed Artemis' hands by the wrists, and holding her in place, beat her with her own bow. Crying, Artemis left her bow and arrows where they lay and ran to Olympus to cry at her father Zeus' knees, while her mother Leto picked up her bow and arrows and followed her weeping daughter. Worship Artemis, the goddess of forests and hills, was worshipped throughout ancient Greece. Her best known cults were on the island of Delos (her birthplace), in Attica at Brauron and Mounikhia (near Piraeus), and in Sparta. She was often depicted in paintings and statues in a forest setting, carrying a bow and arrows and accompanied by a deer. The ancient Spartans used to sacrifice to her as one of their patron goddesses before starting a new military campaign. Athenian festivals in honor of Artemis included Elaphebolia, Mounikhia, Kharisteria, and Brauronia. The festival of Artemis Orthia was observed in Sparta. Pre-pubescent and adolescent Athenian girls were sent to the sanctuary of Artemis at Brauron to serve the Goddess for one year. During this time, the girls were known as arktoi, or little she-bears. A myth explaining this servitude states that a bear had formed the habit of regularly visiting the town of Brauron, and the people there fed it, so that, over time, the bear became tame. A girl teased the bear, and, in some versions of the myth, it killed her, while, in other versions, it clawed out her eyes. Either way, the girl's brothers killed the bear, and Artemis was enraged. She demanded that young girls "act the bear" at her sanctuary in atonement for the bear's death. Artemis was worshipped as one of the primary goddesses of childbirth and midwifery along with Eileithyia. Dedications of clothing to her sanctuaries after a successful birth was common in the Classical era. Artemis could be a deity to be feared by pregnant women, as deaths during this time were attributed to her. As childbirth and pregnancy was a very common and important event, there were numerous other deities associated with it, many localized to a particular geographic area, including but not limited to Aphrodite, Hera and Hekate. It was considered a good sign when Artemis appeared in the dreams of hunters and pregnant women, but a naked Artemis was seen as an ill omen. According to Pseudo-Apollodorus, she assisted her mother in the delivery of her twin. Older sources, such as Homeric Hymn to Delian Apollo (in Line 115), have the arrival of Eileithyia on Delos as the event that allows Leto to give birth to her children. Contradictory is Hesiod's presentation of the myth in Theogony, where he states that Leto bore her children before Zeus' marriage to Hera with no commentary on any drama related to their birth. Despite her being primarily known as a goddess of hunting and the wilderness, she was also connected to dancing, music, and song like her brother Apollo; she is often seen singing and dancing with her nymphs, or leading the chorus of the Muses and the Graces at Delphi. In Sparta, girls of marriageable age performed the partheneia (choral maiden songs) in her honor. An ancient Greek proverb, written down by Aesop, went "For where did Artemis not dance?", signifying the goddess' connection to dancing and festivity. During the Classical period in Athens, she was identified with Hekate. Artemis also assimilated Caryatis (Carya). There was a women's cult at Cyzicus worshiping Artemis, which was called Dolon (Δόλων). Festivals Artemis was born on the sixth day of the month Thargelion (around May), which made it sacred for her, as her birthday. On the seventh day of the same month was Apollo's birthday. Artemis was worshipped in many festivals throughout Greece mainland and the islands, Asia Minor and south Italy. Most of these festivals were celebrated during spring. Attica Athens. The festival Elaphebolia was celebrated on the sixth day of the month Elaphebolion (ninth month) . The name is related to elaphos (deer) and Artemis is the Deer Huntress. Cakes made from flour, honey, and sesame and in the shape of stags were offered to the goddess during the festival. Brauron. The festival was remarkable for the arkteia, where girls, aged between five and ten, were dressed in saffron robes and played at being bears, or "act the bear" to appease the goddess after she sent the plague when her bear was killed. Another commentator says that girls had to ‘placate the goddess for their virginity (parthenia), so that they would not be the object of revenge from her. Piraeus. The festival of Artemis Munichia was celebrated on the 6th or 16th day of the month Munichion (tenth month). Young girls were dressed up as bears, as for the Brauronia. In the temple have been found sherds grom the geometric period. The festival commemorated the victory of the Greek fleet over the Persians at Salamis. Athens. Artemis had a filial cult of Brauronia, near the Acropolis. Agrae, a district of Athens, with a temple of Artemis-Agrotera. (huntress) On the 6th day of the month Boedromion , an armed procession would take a large numbe of goats to the temple. They would all be sacrificed in honor of the victory at the Battle of Marathon. The festival was called "Charisteria", also known as the Athenian "Thanksgiving". Myrrhinus, a deme near Merenda (Markopoulo).There was a cult of Kolainis. Kolainis is usually identified with Artemis Amarysia in Euboia. Some rites and animal sacrifices were probably similar with the rites of Laphria. Athmonia, a deme near Marousi. The festival of Artemis Amarysia, was no less splendid that the festival of Amarysia in Euboea. Halae Araphenides, a deme near Brauron. The fest Tauropolia was celebrated in honour of Artemis Tauropolos. During the festival a human sacrifice was represented in a ritual. Erchia a district of Athens. The modern Athenian airport was built over the ruins of the deme. A festival was celebrated on the 16th day of the month Metageitnion. Sacrifices were offered to Artemis and Hekate. Central Greece Hyampolis in Phocis. During an attack of the Thessalians, the Phocians terrified gathered together in one spot their women, children, movable property, and also their clothes, gold and made a vast pyre. The order was that if they would be defeated, all should be killed and would be thrown into the flames together with their property. The Phocians achieved a great victory and each year they celebrated their victory in the festival Elaphebolia-Laphria in honour of Artemis. All kinds of oferrings were burned in an annual fire, reminding the great pyre of the battle. Delphi in Phocis. The festival Laphria was celebrated in the month Laphrios. The cult of Artemis Laphria was introduced by the priests of Delphi Lab(r)yaden who had probably Cretan origin. Laphria is certainly the Pre-Greek "Mistress of the animals". Delphi in Phocis . The festival Eucleia was celebrated in honour of Artemis. According to the Labyaden-incriptions the oferrings darata are determined by the specified gamela and pedēia. Eucleia was a godess of marriage. Tithorea in Ancient Phocis. It seems that the festival of Isis was a reform of the festival of Artemis Laphria. Erineos in Doris. Festival of Artemis Laphria, indicated by the month Laphrios in the local calendar. Antikyra in Phocis.Cult of Artemis-Diktynaia, a popular goddess who was worshipped with great respect. Thebes in Boeotia. Before marriage a premilinary sacrifice should be made by the bride and the groom to Artemis-Eucleia. Amarynthos in Euboia. Festival of Artemis Amarysia. Animals were sacrificed with rites probably similar with the fest Laphria. Aulis in Boeotia. In a festival all kinds of sacrificial animals were oferred to the goddess. It seems that the festival was a reverberation of the rites of Laphria. Calydon in Aetolia. Calydon is considered the origin of the cult of Artemis Laphria at Patras. In the Aetolian calendar there was the month Laphrios. Near the city there was the temple of Apollo Laphrius; Nafpaktos in Aetolia. Cult of Artemis Laphria. Acarnania. Cult of Artemis-Agrotera (huntress) in a society of hunters. Peloponnese Patras in Achaea. The great festival Laphria was celebrated in honour of Artemis. The characteristic rite was the annual fire. Birds, deers, sacrificial animals, young wolves and young bears were thrown alive in a great pyre. Laphria (Pre-Greek name) is the "Mistress of Animals". Traditionally her cult was introduced from Calydon of Aetolia. Patras. The Ionians who lived in Ancient Achaea celebrated the annual festival of Artemis Triclaria. Pausanias mentions the legend of human sacrifices to the outraged goddess. The new deity Dionysus, put an end to the sacrifices . Corinth. The festival Eucleia was celebrated in honor of Artemis. Aigeira in Achaea. Festival of Artemis Agrotera (huntress). When the Sicyonians attacked the city, the Aigeirians tied torches on all goats of the area and during night they set the torches alight. The Sicyonians believed that Aigeira had a great army and they retreated. Sparta. Festival of Artemis-Orthia. The goddess was associated with the female initiatory rite Partheneion. Women performed round dances. In a legend Theseus stole Helene from the dancing floor of Orthia, during the round-dancing. The significant prize of the competetions was an iron sickle (drepanē) indicating that Orthia was a goddess of vegetation. Sparta on the road to Amyklai. Artemis-Korythalia was a goddess of vegetation. Women performed lascivious dances. The fest was celebrated in round huts covered with leaves. The nurses brought the infants in the temple of Korythalia during the fest Tithenedia. Messene near the borders with Laconia. Festival of Artemis Limnatis (of the lake). The festival was celebrated with cymbals and dances. The goddess was worshipped by young women during the festivals of transition from childhood to adulthood. Dereion on Taygetos in Laconia. Cult of Artemis -Dereatis. The festival was celebrated with the hymns calavoutoi and with the obscene dance callabis. Epidauros Limera in Laconia. Cult of Artemis-Limnatis. Caryae on the borders between Laconia and Arcadia . Festival of Artemis-Caryatis, a goddess of vegetation related to the tree-cult. Each year women performed an exstatic dance called the caryatis. Boiai in Laconia. Cult of Artemis-Soteira (savior), which was related to the myrtle tree. When the inhabitants of the cities near the gulf were expelled, Artemis with the shape of a hare guided them to a myrtle tree where they built the new city. Gytheion in Laconia. Cult of Artemis Laphria, in the month Laphrios. Elis . Pelops (Peloponnese: Pelop's island) had won the sovereignity of Pisa and his followers celebrated their victory near the temple of Artemis-Kordaka. They danced the peculiar dance kordax. Elis . Festival of Artemis-Elaphia in the month Elaphios (elaphos:deer). Elaphia was a goddess of hunting. Letrinoi in Elis . Festival of Artemis Alpheaia. Girls wearing masks performed dances. Olympia in Elis. Annual festival (panegeris) of Artemis Alpheaia . Olympia in Elis. Annual festival of Artemis Elaphia. Olympia in Elis. Annual festival of Artemis Daphnaia (of the laurel-branch), as a goddess of vegetation. Hypsus in Arcadia near the borders of Laconia. Annual festival of Artemis-Diktynna. Her temple was built near the sea. Hypsus . Annual fest of Artemis Daphnaia.(Of the laurel-branch). Stymphalus in Arcadia . Festival of Artemis-Stymphalia. The festival begun near the Katavothres where the water overflowed and created a big marsh. Orchomenus, in Arcadia. A sanctuary was built for Artemis Hymnia where her festival was celebrated every year. Tegea in Arcadia, on the road to Laconia. Cult of Artemis-Limnatis (of the lake). Phigalia in Arcadia. In a battle the Phigalians expelled the conquerors Spartans and recovered their city. On the summit of the Acropolis they built the sanctuary of Artemis-Soteira (Savior) and a statue of the goddess. At the beginning of festivals, all their processions started from the sunctuary. Troizen in Argolis. Festival of Artemis-Saronia. Near the temple was the grave of the king Saron who was drowned into the sea. Northern Greece Aegae, in Macedonia. Eucleia had a shrine with dedications in the agora of the city. The goddess is associated with Artemis-Eucleia, the goddess of marriage who was widely worshipped in Boeotia. Apollonia of Chalcidice. The festival Elaphebolia was celebrated in honor of Artemis in the month Elaphebolion Greek islands Icaria. The Tauropolion, the temple of Artemis Tauropolos was built at Oinoe. There was another smaller temenos that was sacred to Artemis-Tauropolos on the coast of the island. Cephalonia. Cult of Artemis-Laphria who is related to the legend of Britomartis. Corcyra. Cult of Artemis-Laphria in the month Laphrios. Asia Minor Ephesus in Ionia. The great festival Artemisia was celebreted in honor of Artemis. The wealth and splendor of temple and city were taken as evidence of Artemis Ephesia's power. Under Hellenic rule, and later, under Roman rule, the Ephesian Artemisia festival was increasingly promoted as a key element in the pan-Hellenic festival circuit . Perga in Ionia. Famous festival of Artemis-Pergaia. Under Roman rule Diana-Pergaia is identified with Selene. Iasos in Caria. The festival Elaphebolia was celebrated in honor of Artemis in the month Elaphebolion Byzantion. Festival of Artemis-Eucleia in the month Eucleios. Magna Graecia Syracuse in Sicily. The festival of Artemis Chitonia was distinguished by a peculiar dance and by a music on the flute. Chitonia (wearing a loose tunic) was a goddess of hunting. Syracuse in Sicily. Festival of Artemis-Lyaia. Men from the countryside came to the city in a rustic dress. They carried a deer-antler on their head and holded a shepherd's stab. They sang satirical songs drinking wine. The festival was the link between the comic performance and the countryside. Tauromenion in Sicily. Festival of Artemis-Eucleia in the month Eucleios. Festival of Artemis-Korythalia. The male dancers wore wooden masks. Attributes Virginity An important aspect of Artemis' persona and worship was her virginity, which may seem contradictory, given her role as a goddess associated with childbirth. The idea of Artemis as a virgin goddess likely is related to her primary role as a huntress. Hunters traditionally abstained from sex prior to the hunt as a form of ritual purity and out of a belief that the scent would scare off potential prey. The ancient cultural context in which Artemis' worship emerged also held that virginity was a prerequisite to marriage, and that a married woman became subservient to her husband. In this light, Artemis' virginity is also related to her power and independence. Rather than a form of asexuality, it is an attribute that signals Artemis as her own master, with power equal to that of male gods. Her virginity also possibly represents a concentration of fertility that can be spread among her followers, in the manner of earlier mother-goddess figures. However, some later Greek writers did come to treat Artemis as inherently asexual and as an opposite to Aphrodite. Furthermore, some have described Artemis along with the goddesses Hestia and Athena as being asexual; this is mainly supported by the fact that in the Homeric Hymns, 5, To Aphrodite, Aphrodite is described as having "no power" over the three goddesses. As a mother goddess Despite her virginity, both modern scholars and ancient commentaries have linked Artemis to the archetype of the mother goddess. Artemis was traditionally linked to fertility and was petitioned to assist women with childbirth. According to Herodotus, Greek playwright Aeschylus identified Artemis with Persephone as a daughter of Demeter. Her worshipers in Arcadia also traditionally associated her with Demeter and Persephone. In Asia Minor, she was often conflated with local mother-goddess figures, such as Cybele, and Anahita in Iran. The archetype of the mother goddess, though, was not highly compatible with the Greek pantheon, and though the Greeks had adopted the worship of Cybele and other Anatolian mother goddesses as early as the seventh century BCE, she was not directly conflated with any Greek goddesses. Instead, bits and pieces of her worship and aspects were absorbed variously by Artemis, Aphrodite, and others as Eastern influence spread. As the Lady of Ephesus At Ephesus in Ionia, Turkey, her temple became one of the Seven Wonders of the World. It was probably the best-known center of her worship except for Delos. There, the Lady whom the Ionians associated with Artemis through interpretatio graeca was worshipped primarily as a mother goddess, akin to the Phrygian goddess Cybele, in an ancient sanctuary where her cult image depicted the "Lady of Ephesus" adorned with multiple large beads. Excavation at the site of the Artemision in 1987–88 identified a multitude of tear-shaped amber beads that had been hung on the original wooden statue (xoanon), and these were probably carried over into later sculpted copies. In Acts of the Apostles, Ephesian metalsmiths who felt threatened by Saint Paul's preaching of Christianity, jealously rioted in her defense, shouting "Great is Artemis of the Ephesians!" Some scholars contend that the statement "saved by childbearing" in the First Epistle to Timothy is a reference to Artemis's midwifery. Of the 121 columns of Artemis's temple, only one composite, made up of fragments, still stands as a marker of the temple's location. As a lunar deity No records have been found of the Greeks referring to Artemis as a lunar deity, as their lunar deity was Selene, but the Romans identified Artemis with Selene leading them to perceive her as a lunar deity, though the Greeks did not refer to her or worship her as such. As the Romans began to associate Apollo more with Helios, the personification of the Sun, it was only natural that the Romans would then begin to identify Apollo's twin sister, Artemis, with Helios' own sister, Selene, the personification of the Moon. Evidence of the syncretism of Artemis and Selene is found early on; a scholium on the Iliad, claiming to be reporting sixth century BCE author Theagenes's interpretation of the theomachy in Book 21, says that in the fight between Artemis and Hera, Artemis represents the Moon, while Hera represents the earthly air. Active references to Artemis as an illuminating goddess start much later. Notably, Roman-era author Plutarch writes how during the Battle of Salamis, Artemis led the Athenians to victory by shining with the full moon, but all lunar-related narratives of this event come from Roman times, and none of the contemporary writers (such as Herodotus) makes any mention of the night or the Moon. Artemis' connection to childbed and women's labour naturally led to her becoming associated with the menstrual cycle in course of time, thus the Moon. Selene, just like Artemis, was linked to childbirth, as it was believed that women had the easiest labours during the full moon, paving thus the way for the two goddesses to be seen as the same. On that, Cicero writes: Apollo, a Greek name, is called Sol, the sun; and Diana, Luna, the moon. [...] Luna, the moon, is so called a lucendo (from shining); she bears the name also of Lucina: and as in Greece the women in labor invoke Diana Lucifera, Association to health was another reason Artemis and Selene were syncretized; Strabo wrote that Apollo and Artemis were connected to the Sun and the Moon, respectively, which was due to the changes the two celestial bodies caused in the temperature of the air, as the twins were gods of pestilential diseases and sudden deaths. Roman authors applied Artemis/Diana's byname, "Phoebe", to Luna/Selene, the same way as "Phoebus" was given to Helios due to his identification with Apollo. Another epithet of Artemis that Selene appropriated is "Cynthia", meaning "born in Mount Cynthus." The goddesses Artemis, Selene, and Hecate formed a triad, identified as the same goddess with three avatars: Selene in the sky (moon), Artemis on earth (hunting), and Hecate beneath the earth (Underworld). In Italy, those three goddesses became a ubiquitous feature in depictions of sacred groves, where Hecate/Trivia marked intersections and crossroads along with other liminal deities. The Romans enthusiastically celebrated the multiple identities of Diana as Hecate, Luna, and Trivia. Roman poet Horace in his odes enjoins Apollo to listen to the prayers of the boys, as he asks Luna, the "two-horned queen of the stars", to listen to those of the girls in place of Diana, due to their role as protectors of the young. In Virgil's Aeneid, when Nisus addresses Luna/the Moon, he calls her "daughter of Latona." In works of art, the two goddesses were mostly distinguished; Selene is usually depicted as being shorter than Artemis, with a rounder face, and wearing a long robe instead of a short hunting chiton, with a billowing cloak forming an arc above her head. Artemis was sometimes depicted with a lunate crown. As Hecate Hecate was the goddess of crossroads, boundaries, ghosts and witchcraft. She is the queen of the witches. Artemis absorbed the Pre-Greek goddess Potnia Theron who was closely associated with the daimons. In the Mycenean age daimons were lesser deities of ghosts, divine spirits and tutelary deities. Some scholars believe that Hecate was an aspect of Artemis prior to the latter's adoption into the Olympian pantheon. Artemis would have, at that point, become more strongly associated with purity and maidenhood on the one hand, while her originally darker attributes like her association with magic, the souls of the dead, and the night would have continued to be worshipped separately under her title Hecate. Both goddesses carried torches, and were accompanied by a dog. It seems that the character of Artemis in Arcadia was original. At Acacesium Artemis Hegemone is depicted holding two torches, and at Lycosura Artemis is depicted holding a snake and a torch. A bitch suitable for hunting was lying down by her side. Sophocles calles Artemis Amphipyros, carrying a torch in each hand, however the adjective refers also to the twin fire on the two peaks of the mountain Parnassus behind Delphi. In the fest of Laphria at Delphi Artemis is related to the Pre-Greek mistress of the animals, with barbaric sacrifices and possible connections with magic and ghosts since Potnia Theron was close to the daimons. The annual fire was the characteristique custom of the fest. At Kerameikos in Athens Artemis is clearly identified with Hecate. Pausanias believes that Kalliste (the most beautiful ) is a surname of Artemis carrying a torch. In Thessaly the distinctly local goddess Enodia with the surname Pheraia is identified with Hecate. Artemis Pheraia was worshipped in Argos, Athens and Sicyon. Symbols Bow and arrow In Iliad and Odyssey, Artemis is a goddess of hunting, which was a very important sport for the Myceneans. She had a golden bow and arrows and the epithets was Chrisilakatos, she of the golden shaft and Iocheaira, shooter of arrows or archer queen. The arrows of Artemis could also sudden death, a belief which appears also in Indoeuropean folklore and religion (Rudra). The arrows of the goddess bring an immediate and mild death without a previous disease. Apollo and Artemis kill with their arrows the children of Niobe because she offended her mother Leto. Chariots Homer uses the epithet Chrisinios, of the golden reigns, to illustrate the chariot of the goddess of hunting. At the fest of Laphria at Delphi the priestess followed the parade on a chariot which was covered with the skin of a deer. Spears, nets, and lyre Artemis is rarely portrayed with a hunting spear. In her cult in Aetolia, the Artemis Aetole was depicted with a hunting spear or javelin. Artemis is also sometimes depicted with a fishing spear connected with her cult as a patron goddess of fishing. This conception relates her with Diktynna (Britomartis). As a goddess of maiden dances and songs, Artemis is often portrayed with a lyre in ancient art. Deer Deer were the only animals held sacred to Artemis herself. On seeing a deer larger than a bull with horns shining, she fell in love with these creatures and held them sacred. Deer were also the first animals she captured. She caught five golden-horned deer and harnessed them to her chariot. At Lycosura in isolated Arcadia Artemis is depicted holding a snake and a torch and dressed with a deer skin, besides Demeter and Persephone. It seems that the depictions of Artemis and Demeter-Melaina (black) in Arcadia correspond to the earliest conceptions of the first Greeks in Greece. At the fest of Laphria at Delphi the priestess followed the parade on a chariot which was covered with the skin of a deer. The third labour of Heracles, commanded by Eurystheus, consisted of chasing and catching the terrible Ceryneian Hind. The hind was a female deer with golden andlers and hooves of bronze and was sacred to Artemis. Heracles begged Artemis for forgiveness and promised to return it alive. Artemis forgave him, but targeted Eurystheus for her wrath. Hunting dog In a legend Artemis got her hunting dogs from Pan in the forest of Arcadia. Pan gave Artemis two black-and-white dogs, three reddish ones, and one spotted one – these dogs were able to hunt even lions. Pan also gave Artemis seven bitches of the finest Arcadian race, but Artemis only ever brought seven dogs hunting with her at any one time. In the earliest conceptions of Artemis at Lycosura, a bitch suitable for hunting was lying down by her side. Bear In a Pre-Greek cult Artemis was conceived as a bear. Kallisto was transformed into a bear, and she is a hypostasis of Artemis with a theriomorph form. In the cults of Artemis at Brauron and at Piraeus Munichia (arkteia) young virgin girls were disguished to she-bears (arktoi) in a ritual and they served the goddess before marriage. An etiological myth tries to explain the origin of the Arkteia. Every year, a girl between five and ten years of age was sent to Artemis' temple at Brauron. A bear was tamed by Artemis and introduced to the people of Athens. They touched it and played with it until one day a group of girls poked the bear until it attacked them. A brother of one of the girls killed the bear, so Artemis sent a plague in revenge. The Athenians consulted an oracle to understand how to end the plague. The oracle suggested that, in payment for the bear's blood, no Athenian virgin should be allowed to marry until she had served Artemis in her temple (played the bear for the goddess). In a legend of the cult of Munichia if someone killed a bear, then they were to be punished by sacrificing their daughter in the sanctuary. Embaros disguised his daughter dressing her like a bear (arktos), and hid her in the adyton. He placed a goat on the altar and he sacrificed the goat instead of his daughter. Boar The boar is one of the favorite animals of the hunters, and also hard to tame. In honor of Artemis' skill, they sacrificed it to her. Oeneus and Adonis were both killed by Artemis' boar. In The Odyssey, she descends from a peak and she travels along the ridges of Mount Erymanthos, that was sacred to the "Mistress of the animals". When the goddess became wrathful she would send the terrible Erymanthian boar to laid waste the farmer's fields. Heracles managed to kill the terrible creature during his Twelve Labors. In one legend, the Calydonian boar had terrorized the territory of Calydon because Artemis (the mistress of the animals) was offended. The Calydonian boar hunt is one of the great heroic adventures in Greek legend. The most famous Greek heroes including Meleager and Atalanta took part in the expedition. The fierce-hunter virgin Atalanta allied to the goddess Artemis was the first who wounded the Calydonian boar. Ovid describes the boar as follows: A dreadful boar.—His burning, bloodshot eyes seemed coals of living fire, and his rough neck was knotted with stiff muscles, and thick-set with bristles like sharp spikes. A seething froth dripped on his shoulders, and his tusks were like the spoils of Ind [India]. Discordant roars reverberated from his hideous jaws; and lightning—belched forth from his horrid throat— scorched the green fields. — Ovid, Metamorphoses 8.284–289 (Brookes More translation) Guinea fowl Artemis felt pity for the Calydonian princesses Meleagrids as they mourned for their lost brother, Meleager, so she transformed them into Guinea fowl to be her favorite animals. Buzzard hawk Hawks were the favored birds of many of the gods, Artemis included. Bull Artemis is sometimes identified with the mythical bull-goddess in a cult foreign in Greece. The cult can be identified in Halae Araphenides in Attica, where at the end of the peculiar fest a man was sacrificed.Euripides relates her cult with Tauris (tauros:bull) and with the myth of Iphigenia at Brauron. Orestes brought the image of the goddess from Tauris, to Brauron Sparta or Aricia. Torch Artemis is often depicted holding one or two torches. There is not any sufficient explanation for this depiction. The character of the goddess in Arcadia seems to be original. At Acacesium Artemis Hegemone (the leader) is depicted holding two torches. At Lycosura the goddess is depicted holding a snake and a torch, and a bitch suitable for hunting was lying down by her sideSophocles calls Artemis "Elaphebolos, (deer slayer) Amphipyros (with a fire in each end)" reminding the annual fire of the fest Laphria at Delphi. The adjective refers also to the twin fires of the two peaks of the Mount Parnassus above Delphi (Phaedriades). Heshychius believes that Kalliste is the name of Hecate established at Kerameikos of Athens, who some call Artemis (torch bearing). On a relief from Sicily the goddess is depicted holding a torch in one hand and an offering on the other. The torch was used for the ignition of the fire on the altar. Archaic and classical art During the Bronze Age, the "mistress of the animals" is usually depicted between two lions with a peculiar crown on her head. The oldest representations of Artemis in Greek Archaic art portray her as Potnia Theron ("Queen of the Beasts"): a winged goddess holding a stag and lioness in her hands, or sometimes a lioness and a lion. Potnia theron is the only Greek goddess close to the daimons and sometimes is depicted with a Gorgon head, and the Gorgon is her distant ancestor. This winged Artemis lingered in ex-votos as Artemis Orthia, with a sanctuary close by Sparta. In Greek classical art she is usually portrayed as a maiden huntress, young, tall, and slim, clothed in a girl's short skirt, with hunting boots, a quiver, a golden or silver bow and arrows. Often, she is shown in the shooting pose, and is accompanied by a hunting dog or stag. When portrayed as a lunar deity, Artemis wore a long robe and sometimes a veil covered her head. Her darker side is revealed in some vase paintings, where she is shown as the death-bringing goddess whose arrows fell young maidens and women, such as the daughters of Niobe. Artemis was sometimes represented in Classical art with the crown of the crescent moon, such as also found on Luna and others. On June 7, 2007, a Roman-era bronze sculpture of Artemis and the Stag was sold at Sotheby's auction house in New York state by the Albright-Knox Art Gallery for $25.5 million. Modern art Legacy In astronomy 105 Artemis (an asteroid discovered in 1868) Artemis (crater) (a tiny crater on the moon, named in 2010) Artemis Chasma (a nearly circular fracture on the surface of the planet Venus, described in 1980) Artemis Corona (an oval feature largely enclosed by the Artemis Chasma, also described in 1980) Acronym (ArTeMiS) for "Architectures de bolometres pour des Telescopes a grand champ de vue dans le domaine sub-Millimetrique au Sol", a large bolometer camera in the submillimeter range that was installed in 2010 at the Atacama Pathfinder Experiment (APEX), located in the Atacama Desert in northern Chile. In taxonomy The taxonomic genus Artemia, which entirely comprises the family Artemiidae, derives from Artemis. Artemia species are aquatic crustaceans known as brine shrimp, the best-known species of which, Artemia salina, or sea monkeys, was first described by Carl Linnaeus in his Systema Naturae in 1758. Artemia species live in salt lakes, and although they are almost never found in an open sea, they do appear along the Aegean coast near Ephesus, where the Temple of Artemis once stood. In modern spaceflight The Artemis program is an ongoing robotic and crewed spaceflight program carried out by NASA, U.S. commercial spaceflight companies, and international partners such as ESA, the Japan Aerospace Exploration Agency, and the Canadian Space Agency. The program has the goal of landing "the first woman and the next man" on the lunar south pole region no earlier than 2025. Genealogy See also Bendis Dali (goddess) Janus Lunar deity Palermo Fragment Regarding Tauropolos: Bull (mythology) Iphigenia in Tauris Taurus (Mythology) References Bibliography Aelian, On Animals, Volume III: Books 12-17, translated by A. F. Scholfield, Loeb Classical Library No. 449, Cambridge, Massachusetts, Harvard University Press, 1959. Online version at Harvard University Press. . Apollodorus, Apollodorus, The Library, with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library. Aratus Solensis, Phaenomena translated by G. R. Mair. Loeb Classical Library Volume 129. London: William Heinemann, 1921. Online version at the Topos Text Project. Athenaeus, The Learned Banqueters, Volume V: Books 10.420e-11. Edited and translated by S. Douglas Olson. Loeb Classical Library 274. Cambridge, MA: Harvard University Press, 2009. Budin, Stephanie, Artemis, Routledge publications, 2016, . Google books. Burkert, Walter, Greek Religion, Harvard University Press, 1985. . Callimachus. Hymns, translated by Alexander William Mair (1875–1928). London: William Heinemann; New York: G.P. Putnam's Sons. 1921. Internet Archive. Online version at the Topos Text Project. Celoria, Francis, The Metamorphoses of Antoninus Liberalis: A Translation with a Commentary, Routledge, 1992. . Cicero, Nature of the Gods, from the Treatises of M.T. Cicero, translated by Charles Duke Yonge (1812-1891), Bohn edition of 1878, in the public domain. Text available online at Topos text. Collins-Clinton, Jacquelyn, Cosa: The Sculpture and Furnishings in Stone and Marble, University of Michigan Press, 2020, . Google books. Diodorus Siculus, Bibliotheca Historica. Vol 1-2. Immanel Bekker. Ludwig Dindorf. Friedrich Vogel. in aedibus B. G. Teubneri. Leipzig. 1888–1890. Greek text available at the Perseus Digital Library. Evelyn-White, Hugh, The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White. Homeric Hymns. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1914. Google Books. Internet Archive. Fontenrose, Joseph Eddy, Orion: The Myth of the Hunter and the Huntress, University of California Press, 1981. . Forbes Irving, P. M. C., Metamorphosis in Greek Myths, Clarendon Press Oxford, 1990. . Freeman, Kathleen, Ancilla to the Pre-Socratic Philosophers: A Complete Translation of the Fragments in Diels, Fragmente Der Vorsokratiker, Harvard University Press, 1983. . Gantz, Timothy, Early Greek Myth: A Guide to Literary and Artistic Sources, Johns Hopkins University Press, 1996, Two volumes: (Vol. 1), (Vol. 2). Robert Graves (1955) 1960. The Greek Myths (Penguin) Grimal, Pierre, The Dictionary of Classical Mythology, Wiley-Blackwell, 1996. . Hansen, William, Handbook of Classical Mythology, ABC-CLIO, 2004. . Hard, Robin, The Routledge Handbook of Greek Mythology: Based on H.J. Rose's "Handbook of Greek Mythology", Psychology Press, 2004, . Google Books. Homer, The Iliad with an English Translation by A.T. Murray, PhD in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library. Hesiod, Astronomia, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1914. Internet Archive. Hesiod, Theogony, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, MA., Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library. Hyginus, Gaius Julius, De Astronomica, in The Myths of Hyginus, edited and translated by Mary A. Grant, Lawrence: University of Kansas Press, 1960. Online version at ToposText. Hyginus, Gaius Julius, Fabulae, in The Myths of Hyginus, edited and translated by Mary A. Grant, Lawrence: University of Kansas Press, 1960. Online version at ToposText. Kerényi, Karl (1951), The Gods of the Greeks, Thames and Hudson, London, 1951. Liddell, Henry George, Robert Scott, A Greek-English Lexicon, revised and augmented throughout by Sir Henry Stuart Jones with the assistance of Roderick McKenzie, Clarendon Press Oxford, 1940. Online version at the Perseus Digital Library. Mikalson, Jon D., The Sacred and Civil Calendar of the Athenian Year, Princeton University Press, 1975. Google books. Morford, Mark P. O., Robert J. Lenardon, Classical Mythology, Eighth Edition, Oxford University Press, 2007. . Internet Archive. Most, G.W., Hesiod, Theogony, Works and Days, Testimonia, Edited and translated by Glenn W. Most, Loeb Classical Library No. 57, Cambridge, Massachusetts, Harvard University Press, 2018. . Online version at Harvard University Press. Most, G.W., Hesiod: The Shield, Catalogue of Women, Other Fragments, Loeb Classical Library, No. 503, Cambridge, Massachusetts, Harvard University Press, 2007, 2018. . Online version at Harvard University Press. Nonnus, Dionysiaca; translated by Rouse, W H D, in three volumes. Loeb Classical Library No. 346, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1940. Internet Archive. Ovid, Metamorphoses, Brookes More, Boston, Cornhill Publishing Co. 1922. Online version at the Perseus Digital Library. Ovid. Metamorphoses, Volume I: Books 1-8. Translated by Frank Justus Miller. Revised by G. P. Goold. Loeb Classical Library No. 42. Cambridge, Massachusetts: Harvard University Press, 1977, first published 1916. . Online version at Harvard University Press. Ovid, Ovid's Fasti: With an English translation by Sir James George Frazer, London: W. Heinemann LTD; Cambridge, Massachusetts, Harvard University Press, 1959. Internet Archive. The J. Paul Getty Museum Journal: Volume 24, 1996, . Google books. The Oxford Classical Dictionary, second edition, Hammond, N.G.L. and Howard Hayes Scullard (editors), Oxford University Press, 1992. . Pannen, Imke, When the Bad Bleeds: Mantic Elements in English Renaissance Revenge Tragedy, Volume 3 of Representations & Reflections; V&R unipress GmbH, 2010. . Papathomopoulos, Manolis, Antoninus Liberalis: Les Métamorphoses, Collection Budé, Paris, Les Belles Lettres, 1968. . Pausanias, Pausanias Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library. Pindar, The Odes of Pindar including the Principal Fragments with an Introduction and an English Translation by Sir John Sandys, Litt.D., FBA. Cambridge, MA., Harvard University Press; London, William Heinemann Ltd. 1937. Greek text available at the Perseus Digital Library. Smith, William; Dictionary of Greek and Roman Biography and Mythology, London (1873). Strabo, The Geography of Strabo. Edition by H.L. Jones. Cambridge, Mass.: Harvard University Press; London: William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library. . Tripp, Edward, Crowell's Handbook of Classical Mythology, Thomas Y. Crowell Co; First edition (June 1970). . West, M. L. (2003), Greek Epic Fragments: From the Seventh to the Fifth Centuries BC, edited and translated by Martin L. West, Loeb Classical Library No. 497, Cambridge, Massachusetts, Harvard University Press, 2003. . Online version at Harvard University Press. External links Theoi Project, Artemis, information on Artemis from original Greek and Roman sources, images from classical art. A Dictionary of Greek and Roman Antiquities (1890) (eds. G. E. Marindin, William Smith, LLD, William Wayte) Fischer-Hansen T., Poulsen B. (eds.) From Artemis to Diana: the goddess of man and beast. Collegium Hyperboreum and Museum Tusculanum Press, Copenhagen, 2009 Warburg Institute Iconographic Database (ca 1,150 images of Artemis) Animal goddesses Childhood goddesses Hunting goddesses Lunar goddesses Nature goddesses Night goddesses Greek virgin goddesses Mythological Greek archers Children of Zeus Divine twins Deities in the Iliad Metamorphoses characters Rape of Persephone Dog deities Deities in the Aeneid Light goddesses Bear deities Women in Greek mythology Mountain goddesses Dance goddesses Tree goddesses Health goddesses Women of the Trojan war Fertility goddesses Twelve Olympians Plague goddesses Music and singing goddesses Mythological hunters Kourotrophoi Shapeshifters in Greek mythology Wolf deities
2919
https://en.wikipedia.org/wiki/Ana%C3%AFs%20Nin
Anaïs Nin
Angela Anaïs Juana Antolina Rosa Edelmira Nin y Culmell ( , ; February 21, 1903 – January 14, 1977) was a French-born American diarist, essayist, novelist, and writer of short stories and erotica. Born to Cuban parents in France, Nin was the daughter of the composer Joaquín Nin and the classically trained singer Rosa Culmell. Nin spent her early years in Spain and Cuba, about sixteen years in Paris (1924–1940), and the remaining half of her life in the United States, where she became an established author. Nin wrote journals prolifically from age eleven until her death. Her journals, many of which were published during her lifetime, detail her private thoughts and personal relationships. Her journals also describe her marriages to Hugh Parker Guiler and Rupert Pole, in addition to her numerous affairs, including those with psychoanalyst Otto Rank and writer Henry Miller, both of whom profoundly influenced Nin and her writing. In addition to her journals, Nin wrote several novels, critical studies, essays, short stories, and volumes of erotica. Much of her work, including the collections of erotica Delta of Venus and Little Birds, was published posthumously amid renewed critical interest in her life and work. Nin spent her later life in Los Angeles, California, where she died of cervical cancer in 1977. She was a finalist for the Neustadt International Prize for Literature in 1976. Early life Anaïs Nin was born in Neuilly, France, to Joaquín Nin, a Cuban pianist and composer of Catalan descent, and Rosa Culmell, a classically trained Cuban singer of French and Danish descent. Her father's grandfather had fled France during the Revolution, going first to Saint-Domingue, then New Orleans, and finally to Cuba where he helped build the country's first railway. Nin was raised a Roman Catholic but left the church when she was 16 years old. She spent her childhood and early life in Europe. Her parents separated when she was two; her mother then moved Anaïs and her two brothers, Thorvald Nin and Joaquín Nin-Culmell, to Barcelona, and then to New York City, where she attended high school. Nin would drop out of high school in 1919 at age sixteen, and according to her diaries, Volume One, 1931–1934, later began working as an artist's model. After being in the United States for several years, Nin had forgotten how to speak Spanish, but retained her French and became fluent in English. On March 3, 1923, in Havana, Cuba, Nin married her first husband, Hugh Parker Guiler (1898–1985), a banker and artist, later known as "Ian Hugo" when he became a maker of experimental films in the late 1940s. The couple moved to Paris the following year, where Guiler pursued his banking career and Nin began to pursue her interest in writing; in her diaries she also mentions having trained as a flamenco dancer in Paris in the mid-to-late 1920s with Francisco Miralles Arnau. Her first published work was a critical 1932 evaluation of D. H. Lawrence called D. H. Lawrence: An Unprofessional Study, which she wrote in sixteen days. Nin became profoundly interested in psychoanalysis and would study it extensively, first with René Allendy in 1932 and then with Otto Rank. Both men eventually became her lovers, as she recounts in her Journal. On her second visit to Rank, Nin reflects on her desire to be reborn as a woman and artist. Rank, she observes, helped her move back and forth between what she could verbalize in her journals and what remained unarticulated. She discovered the quality and depth of her feelings in the wordless transitions between what she could say and what she could not say. "As he talked, I thought of my difficulties with writing, my struggles to articulate feelings not easily expressed. Of my struggles to find a language for intuition, feeling, instincts which are, in themselves, elusive, subtle, and wordless." In the late summer of 1939, when residents from overseas were urged to leave France due to the approaching war, Nin left Paris and returned to New York City with her husband (Guiler was, according to his own wishes, edited out of the diaries published during Nin's lifetime; his role in her life is therefore difficult to gauge). During the war, Nin sent her books to Frances Steloff of the Gotham Book Mart in New York for safekeeping. In New York, Anaïs rejoined Otto Rank, who had previously moved there, and moved into his apartment. She actually began to act as a psychoanalyst herself, seeing patients in the room next to Rank's. She quit after several months, however, stating: "I found that I wasn't good because I wasn't objective. I was haunted by my patients. I wanted to intercede." It was in New York that she met the Japanese-American modernist photographer Soichi Sunami, who went on to photograph her for many of her books. Literary career Journals Nin's most studied works are her diaries or journals, which she began writing in her adolescence. The published journals, which span several decades from 1933 onward, provide a deeply explorative insight into her personal life and relationships. Nin was acquainted, often quite intimately, with a number of prominent authors, artists, psychoanalysts, and other figures, and wrote of them often, especially Otto Rank. Moreover, as a female author describing a primarily masculine constellation of celebrities, Nin's journals have acquired importance as a counterbalancing perspective. She initially wrote in French and did not begin to write in English until she was seventeen. Nin felt that French was the language of her heart, Spanish was the language of her ancestors, and English was the language of her intellect. The writing in her diaries is explicitly trilingual; she uses whichever language best expresses her thought. In the third volume of her unexpurgated journal, Incest, she wrote about her father candidly and graphically (207–15), detailing her adult sexual relationship with him. Previously unpublished works are coming to light in A Café in Space, the Anaïs Nin Literary Journal, which includes "Anaïs Nin and Joaquín Nin y Castellanos: Prelude to a SymphonyLetters between a father and daughter". So far sixteen volumes of her journals have been published. All but the last five of her adult journals are in expurgated form. Erotic writings Nin is hailed by many critics as one of the finest writers of female erotica. She was one of the first women known to explore fully the realm of erotic writing, and certainly the first prominent woman in the modern West known to write erotica. Before her, erotica acknowledged to be written by women was rare, with a few notable exceptions, such as the work of Kate Chopin. Nin often cited authors Djuna Barnes and D. H. Lawrence as inspirations, and she states in Volume One of her diaries that she drew inspiration from Marcel Proust, André Gide, Jean Cocteau, Paul Valéry, and Arthur Rimbaud. According to Volume One of her diaries, 1931–1934, published in 1966, Nin first came across erotica when she returned to Paris with her husband, mother and two brothers in her late teens. They rented the apartment of an American man who was away for the summer, and Nin came across a number of French paperbacks: "One by one, I read these books, which were completely new to me. I had never read erotic literature in America... They overwhelmed me. I was innocent before I read them, but by the time I had read them all, there was nothing I did not know about sexual exploits... I had my degree in erotic lore." Faced with a desperate need for money, Nin, Henry Miller and some of their friends began in the 1940s to write erotic and pornographic narratives for an anonymous "collector" for a dollar a page, somewhat as a joke. (It is not clear whether Miller actually wrote these stories or merely allowed his name to be used.) Nin considered the characters in her erotica to be extreme caricatures and never intended the work to be published, but changed her mind in the early 1970s and allowed them to be published as Delta of Venus and Little Birds. In 2016, a previously undiscovered collection of erotica, Auletris, was published for the first time. Nin was a friend, and in some cases lover, of many literary figures, including Miller, John Steinbeck, Antonin Artaud, Edmund Wilson, Gore Vidal, James Agee, James Leo Herlihy, and Lawrence Durrell. Her passionate love affair and friendship with Miller strongly influenced her both sexually and as an author. Claims that Nin was bisexual were given added circulation by the 1990 Philip Kaufman film Henry & June about Miller and his second wife June Miller. The first unexpurgated portion of Nin's journal to be published, Henry and June, makes it clear that Nin was stirred by June to the point of saying (paraphrasing), "I have become June," though it is unclear whether she consummated her feelings for her sexually. To both Anaïs and Henry, June was a femme fataleirresistible, cunning, erotic. Nin gave June money, jewelry, clothes; often leaving herself without money. Novels and other publications In addition to her journals and collections of erotica, Nin wrote several novels, which were frequently associated by critics with surrealism. Her first book of fiction, House of Incest (1936), contains heavily veiled allusions to a brief sexual relationship Nin had with her father in 1933: While visiting her estranged father in France, the then-thirty-year-old Nin had a brief incestuous sexual relationship with him. In 1944, she published a collection of short stories titled Under a Glass Bell, which were reviewed by Edmund Wilson. Nin was also the author of several works of non-fiction: Her first publication, written during her years studying psychoanalysis, was D. H. Lawrence: An Unprofessional Study (1932), an assessment of the works of D.H. Lawrence. In 1968, she published The Novel of the Future, which elaborated on her approach to writing and the writing process. Personal life According to her diaries, Vol. 1, 1931–1934, Nin shared a bohemian lifestyle with Henry Miller during her time in Paris. Her husband Guiler is not mentioned anywhere in the published edition of the 1930s parts of her diary (Vol. 1–2) although the opening of Vol. 1 makes it clear that she is married, and the introduction suggests her husband refused to be included in the published diaries. The diaries edited by her second husband, after her death, tell that her union with Miller was very passionate and physical, and that she believed that it was a pregnancy by him that she aborted in 1934. In 1947, at the age of 44, she met former actor Rupert Pole in a Manhattan elevator on her way to a party. The two ended up dating and traveled to California together; Pole was sixteen years her junior. On March 17, 1955, while still married to Guiler, she married Pole at Quartzsite, Arizona, returning with him to live in California. Guiler remained in New York City and was unaware of Nin's second marriage until after her death in 1977, though biographer Deirdre Bair alleges that Guiler knew what was happening while Nin was in California, but consciously "chose not to know". Nin referred to her simultaneous marriages as her "bicoastal trapeze". According to Deidre Bair: In 1966, Nin had her marriage with Pole annulled, due to the legal issues arising from both Guiler and Pole trying to claim her as a dependent on their federal tax returns. Though the marriage was annulled, Nin and Pole continued to live together as if they were married, up until her death in 1977. According to Barbara Kraft, prior to her death Anaïs had written to Hugh Guiler asking for his forgiveness. He responded by writing how meaningful his life had been because of her. After Guiler's death in 1985, the unexpurgated versions of her journals were commissioned by Pole. Six volumes have appeared (Henry and June, Fire, Incest, Nearer the Moon, Mirages, and Trapeze). Pole arranged for Guiler's ashes to be scattered in the same area where Anaïs's ashes were scattered, a place called Mermaid Cove off the Pacific coast. Pole died in July 2006. Nin once worked at Lawrence R. Maxwell Books, located at 45 Christopher Street in New York City. In addition to her work as a writer, Nin appeared in the Kenneth Anger film Inauguration of the Pleasure Dome (1954) as Astarte; in the Maya Deren film Ritual in Transfigured Time (1946); and in Bells of Atlantis (1952), a film directed by Guiler under the name "Ian Hugo" with a soundtrack of electronic music by Louis and Bebe Barron. In her later life, Nin worked as a tutor at the International College in Los Angeles. Death Nin was diagnosed with cervical cancer in 1974. She battled the cancer for several years as it metastasized, and underwent numerous surgical operations, radiation, and chemotherapy. Nin died of the cancer at Cedars-Sinai Medical Center in Los Angeles, California, on January 14, 1977. Her body was cremated, and her ashes were scattered over Santa Monica Bay in Mermaid Cove. Her first husband, Hugh Guiler, died in 1985, and his ashes were scattered in the cove as well. Rupert Pole was named Nin's literary executor, and he arranged to have new, unexpurgated editions of Nin's books and diaries published between 1985 and his death in 2006. Large portions of the diaries are still available only in the expurgated form. The originals are located in the UCLA Library. Legacy The explosion of the feminist movement in the 1960s gave feminist perspectives on Nin's writings of the past twenty years, which made Nin a popular lecturer at various universities; contrarily, Nin dissociated herself from the political activism of the movement. In 1973, prior to her death, Nin received an honorary doctorate from the Philadelphia College of Art. She was also elected to the United States National Institute of Arts and Letters in 1974, and in 1976 was presented with a Los Angeles Times Woman of the Year award. The Italian film La stanza delle parole [dubbed into English as The Room of Words] was released in 1989 based on the Henry and June diaries. Philip Kaufman directed the 1990 film Henry & June based on Nin's diaries published as Henry and June: From the Unexpurgated Diary of Anaïs Nin. She was portrayed in the film by actress Maria de Medeiros. In February 2008, poet Steven Reigns organized Anaïs Nin at 105 at the Hammer Museum in Westwood, Los Angeles. Reigns said: "Nin bonded and formed very deep friendships with women and men decades younger than her. Some of them are still living in Los Angeles and I thought it'd be wonderful to have them share their experiences with [Nin]." Bebe Barron, electronic music pioneer and longtime friend of Nin, made her last public appearance at this event. Reigns also published an essay refuting Bern Porter's claims of a sexual relationship with Nin in the 1930s. Cuban-American writer Daína Chaviano paid homage to Anaïs Nin and Henry Miller in her novel Gata encerrada (2001), where both characters are portrayed as disembodied spirits whose previous lives they shared with Melisa, the main character—and presumably Chaviano's alter ego—, a young Cuban obsessed with Anaïs Nin. The Cuban poet and novelist Wendy Guerra, long fascinated with Nin's life and works, published a fictional diary in Nin's voice, Posar desnuda en la Habana (Posing Nude in Havana) in 2012. She explained that "[Nin's] Cuban Diary has very few pages and my delirium was always to write an apocryphal novel; literary conjecture about what might have happened". On September 27, 2013, screenwriter and author Kim Krizan published an article in The Huffington Post revealing she had found a previously unpublished love letter written by Gore Vidal to Nin. This letter contradicts Gore Vidal's previous characterization of his relationship with Nin, showing that Vidal did have feelings for Nin that he later heavily disavowed in his autobiography, Palimpsest. Krizan did this research in the run up to the release of the fifth volume of Anaïs Nin's uncensored diary, Mirages, for which Krizan provided the foreword. In 2015, The Erotic Adventures of Anais Nin a documentary film directed by Sarah Aspinall, was released, in which Lucy Cohu portrayed Nin's character. In 2019, Kim Krizan published Spy in the House of Anaïs Nin, an examination of long-buried letters, papers, and original manuscripts Krizan found while doing archival work in Nin's Los Angeles home. Also that year, Routledge published the book Anaïs Nin: A Myth of Her Own by Clara Oropeza, that analyzes Nin's literature and literary theory through the perspective of mythological studies and depth psychology. In 2002 Alissa Levy Caiano produced a short film called 'The All-Seeing' based on Nin's short story of the same name in Under a Glass Bell. In 2021, the Porn film company Thousand Faces released a short film called 'Mathilde' based on Nin's story of the same name in Delta of Venus. Bibliography Diaries The Early Diary of Anaïs Nin (1914–1931), in four volumes The Diary of Anaïs Nin, in seven volumes, edited by herself Henry and June: From A Journal of Love. The Unexpurgated Diary of Anaïs Nin (1931–1932) (1986), edited by Rupert Pole after her death Incest: From a Journal of Love (1992) Fire: From A Journal of Love (1995) Nearer the Moon: From A Journal of Love (1996) Mirages: The Unexpurgated Diary of Anaïs Nin, 1939–1947 (2013) Trapeze: The Unexpurgated Diary of Anaïs Nin, 1947–1955 (2017) The Diary of Others: The Unexpurgated Diary of Anaïs Nin, 1955–1966 (2021) A Joyous Transformation: The Unexpurgated Diary of Anaïs Nin, 1966–1977 (forthcoming) Correspondence Letters to a friend in Australia (1992) A Literate Passion: Letters of Anaïs Nin & Henry Miller (1987) Arrows of Longing: Correspondence Between Anaïs Nin & Felix Pollack, 1952–1976 (1998) Reunited: The Correspondence of Anaïs and Joaquin Nin, 1933–1940 (2020) Letters to Lawrence Durrell 1937–1977 (2020) Novels House of Incest (1936) Winter of Artifice (1939) Cities of the Interior (1959), in five volumes: Ladders to Fire Children of the Albatross The Four-Chambered Heart A Spy in the House of Love Seduction of the Minotaur, originally published as Solar Barque (1958). Collages (1964) Short stories Waste of Timelessness: And Other Early Stories (written before 1932, published posthumously) Under a Glass Bell (1944) Delta of Venus (1977) Little Birds (1979) Auletris (2016) Non-fiction D. H. Lawrence: An Unprofessional Study (1932) The Novel of the Future (1968) In Favor of the Sensitive Man (1976) The Mystic of Sex: Uncollected Writings: 1930-1974 (1995) Filmography Ritual in Transfigured Time (1946): Short film, dir. Maya Deren Bells of Atlantis (1952): Short film, dir. Ian Hugo Inauguration of the Pleasure Dome (1954): Short film, dir. Kenneth Anger Melodic Inversion (1958) Lectures pour tous (1964) Anaïs Nin Her Diary (1966) Un moment avec une grande figure de la littérature, Anaïs Nin, (3 May 1968) Anaïs Nin at the University of California, Berkeley, (December 1971) Anaïs Nin at Hampshire College, (1972) '''Ouvrez les guillemets, (11 November 1974) Journal de Paris, (21 November 1974) Anais Nin Observed (1974): Documentary, dir. Robert Snyder See also List of Cuban American writers List of Cuban Americans Citations Works cited Further reading Oropeza, Clara. (2019) Anaïs Nin: A Myth of Her Own, Routledge Yaguchi, Yuko. (2022) Anaïs Nin's Paris Revisited The English–French Bilingual Edition (French Edition), Wind Rose-Suiseisha Bita, Lili. (1994) "Anais Nin". EI Magazine of European Art Center (EUARCE), Is. 7/1994 pp. 9, 24–30 External links The Official Anaïs Nin Blog Sky Blue Press Preserving and promoting her literary work. Anaïs Nin.com Thinking of Anaïs Nin Anaïs Nin Foundation Contact the Anaïs Nin estate for rights and permissions requests Ian Hugo (Nin's husband) Anais Nin's Hideaway Home in Los Angeles (2022-03-21 in The New York Times) 1903 births 1977 deaths 20th-century American non-fiction writers 20th-century American women writers 20th-century diarists 20th-century French essayists American diarists American people of Catalan descent American writers of Cuban descent American people of Danish descent Analysands of Otto Rank Analysands of René Allendy Burials at sea Burials in California Deaths from cancer in California Deaths from cervical cancer Former Roman Catholics French emigrants to the United States French erotica writers French novelists French people of Cuban descent French people of Catalan descent French people of Danish descent French short story writers Writers from Neuilly-sur-Seine People with acquired American citizenship Polyandry Women diarists Women erotica writers Writers from Paris
2923
https://en.wikipedia.org/wiki/AIM%20%28software%29
AIM (software)
AIM (AOL Instant Messenger) was an instant messaging and presence computer program created by AOL, which used the proprietary OSCAR instant messaging protocol and the TOC protocol to allow registered users to communicate in real time. AIM was popular by the late 1990s, in United States and other countries, and was the leading instant messaging application in that region into the following decade. Teens and college students were known to use the messenger's away message feature to keep in touch with friends, often frequently changing their away message throughout a day or leaving a message up with one's computer left on to inform buddies of their ongoings, location, parties, thoughts, or jokes. AIM's popularity declined as AOL subscribers started decreasing and steeply towards the 2010s, as Gmail's Google Talk, SMS, and Internet social networks, like Facebook gained popularity. Its fall has often been compared with other once-popular Internet services, such as Myspace. In June 2015, AOL was acquired by Verizon Communications. In June 2017, Verizon combined AOL and Yahoo into its subsidiary Oath Inc. (now called Yahoo). The company discontinued AIM as a service on December 15, 2017. History In May 1997, AIM was released unceremoniously as a stand-alone download for Microsoft Windows. AIM was an outgrowth of "online messages" in the original platform written in PL/1 on a Stratus computer by Dave Brown. At one time, the software had the largest share of the instant messaging market in North America, especially in the United States (with 52% of the total reported ). This does not include other instant messaging software related to or developed by AOL, such as ICQ and iChat. During its heyday, its main competitors were ICQ (which AOL acquired in 1998), Yahoo! Messenger and MSN Messenger. AOL particularly had a rivalry or "chat war" with PowWow and Microsoft, starting in 1999. There were several attempts from Microsoft to simultaneously log into their own and AIM's protocol servers. AOL was unhappy about this and started blocking MSN Messenger from being able to access AIM. This led to efforts by many companies to challenge the AOL and Time Warner merger on the grounds of antitrust behaviour, leading to the formation of the OpenNet Coalition. Official mobile versions of AIM appeared as early as 2001 on Palm OS through the AOL application. Third-party applications allowed it to be used in 2002 for the Sidekick. A version for Symbian OS was announced in 2003 as were others for BlackBerry and Windows Mobile After 2012, stand-alone official AIM client software included advertisements and was available for Microsoft Windows, Windows Mobile, Classic Mac OS, macOS, Android, iOS, and BlackBerry OS. Usage decline and product sunset Around 2011, AIM started to lose popularity rapidly, partly due to the quick rise of Gmail and its built-in real-time Google Chat instant messenger integration in 2011 and because many people migrated to SMS or iMessages text messaging and later, social networking websites and apps for instant messaging, in particular, Facebook Messenger, which was released as a standalone application the same year. AOL made a partnership to integrate AIM messaging in Google Talk, and had a feature for AIM users to send SMS messages directly from AIM to any number, as well as for SMS users to send an IM to any AIM user. As of June 2011, one source reported AOL Instant Messenger market share had collapsed to 0.73%. However, this number only reflected installed IM applications, and not active users. The engineers responsible for AIM claimed that they were unable to convince AOL management that free was the future. On March 3, 2012, AOL ended employment of AIM's development staff while leaving it active and with help support still provided. On October 6, 2017, it was announced that the AIM service would be discontinued on December 15; however, a non-profit development team known as Wildman Productions started up a server for older versions of AOL Instant Messenger, known as AIM Phoenix. The "AIM Man" The AIM mascot was designed by JoRoan Lazaro and was implemented in the first release in 1997. This was a yellow stickman-like figure, often called the "Running Man". The mascot appeared on all AIM logos and most wordmarks, and always appeared at the top of the buddy list. AIM's popularity in the late 1990s and the 2000s led to the "Running Man" becoming a familiar brand on the Internet. After over 14 years, the iconic logo disappeared as part of the AIM rebranding in 2011. However, in August 2013, the "Running Man" returned. It was used for other AOL services like AOL Top Speed. In 2014, a Complex editor called it a "symbol of America". In April 2015, the Running Man was officially featured in the Virgin London Marathon, dressed by a person for the AOL-partnered Free The Children charity. Protocol The standard protocol that AIM clients used to communicate is called Open System for CommunicAtion in Realtime (OSCAR). Most AOL-produced versions of AIM and popular third party AIM clients use this protocol. However, AOL also created a simpler protocol called TOC that lacks many of OSCAR's features, but was sometimes used for clients that only require basic chat functionality. The TOC/TOC2 protocol specifications were made available by AOL, while OSCAR is a closed protocol that third parties had to reverse-engineer. In January 2008, AOL introduced experimental Extensible Messaging and Presence Protocol (XMPP) support for AIM, allowing AIM users to communicate using the standardized, open-source XMPP. However, in March 2008, this service was discontinued. In May 2011, AOL started offering limited XMPP support. On March 1, 2017, AOL announced (via XMPP-login-time messages) that the AOL XMPP gateway would be desupported, effective March 28, 2017. Privacy For privacy regulations, AIM had strict age restrictions. AIM accounts are available only for people over the age of 13; children younger than that were not permitted access to AIM. Under the AIM Privacy Policy, AOL had no rights to read or monitor any private communications between users. The profile of the user had no privacy. In November 2002, AOL targeted the corporate industry with Enterprise AIM Services (EAS), a higher security version of AIM. If public content was accessed, it could be used for online, print or broadcast advertising, etc. This was outlined in the policy and terms of service: "... you grant AOL, its parent, affiliates, subsidiaries, assigns, agents and licensees the irrevocable, perpetual, worldwide right to reproduce, display, perform, distribute, adapt and promote this Content in any medium". This allowed anything users posted to be used without a separate request for permission. AIM's security was called into question. AOL stated that it had taken great pains to ensure that personal information will not be accessed by unauthorized members, but that it cannot guarantee that it will not happen. AIM was different from other clients, such as Yahoo! Messenger, in that it did not require approval from users to be added to other users' buddy lists. As a result, it was possible for users to keep other unsuspecting users on their buddy list to see when they were online, read their status and away messages, and read their profiles. There was also a Web API to display one's status and away message as a widget on one's webpage. Though one could block a user from communicating with them and seeing their status, this did not prevent that user from creating a new account that would not automatically be blocked and therefore able to track their status. A more conservative privacy option was to select a menu feature that only allowed communication with users on one's buddy list; however, this option also created the side-effect of blocking all users who were not on one's buddy list. Users could also choose to be invisible to all. Chat robots AOL and various other companies supplied robots (bots) on AIM that could receive messages and send a response based on the bot's purpose. For example, bots could help with studying, like StudyBuddy. Some were made to relate to children and teenagers, like Spleak. Others gave advice. The more useful chat bots had features like the ability to play games, get sport scores, weather forecasts or financial stock information. Users were able to talk to automated chat bots that could respond to natural human language. They were primarily put into place as a marketing strategy and for unique advertising options. It was used by advertisers to market products or build better consumer relations. Before the inclusions of such bots, the other bots DoorManBot and AIMOffline provided features that were provided by AOL for those who needed it. ZolaOnAOL and ZoeOnAOL were short-lived bots that ultimately retired their features in favor of SmarterChild. URI scheme AOL Instant Messenger's installation process automatically installed an extra URI scheme ("protocol") handler into some Web browsers, so URIs beginning "aim:" could open a new AIM window with specified parameters. This was similar in function to the mailto: URI scheme, which created a new e-mail message using the system's default mail program. For instance, a webpage might have included a link like the following in its HTML source to open a window for sending a message to the AIM user notarealuser: <a href="aim:goim?screenname=notarealuser">Send Message</a> To specify a message body, the message parameter was used, so the link location would have looked like this: aim:goim?screenname=notarealuser&message=This+is+my+message To specify an away message, the message parameter was used, so the link location would have looked like this: aim:goaway?message=Hello,+my+name+is+Bill When placing this inside a URL link, an AIM user could click on the URL link and the away message "Hello, my name is Bill" would instantly become their away message. To add a buddy, the addbuddy message was used, with the "screenname" parameter aim:addbuddy?screenname=notarealuser This type of link was commonly found on forum profiles to easily add contacts. Vulnerabilities AIM had security weaknesses that have enabled exploits to be created that used third-party software to perform malicious acts on users' computers. Although most were relatively harmless, such as being kicked off the AIM service, others performed potentially dangerous actions, such as sending viruses. Some of these exploits relied on social engineering to spread by automatically sending instant messages that contained a Uniform Resource Locator (URL) accompanied by text suggesting the receiving user click on it, an action which leads to infection, i.e., a trojan horse. These messages could easily be mistaken as coming from a friend and contain a link to a Web address that installed software on the user's computer to restart the cycle. Users also have reported sudden additions of toolbars and advertisements from third parties in the newer version of AIM. Multiple complaints about the lack of control of third party involvement have caused many users to stop using the service. Extra features iPhone application On March 6, 2008, during Apple Inc.'s iPhone SDK event, AOL announced that they would be releasing an AIM application for iPhone and iPod Touch users. The application was available for free from the App Store, but the company also provides a paid version, which displays no advertisements. Both were available from the App Store. The AIM client for iPhone and iPod Touch supported standard AIM accounts, as well as MobileMe accounts. There was also an express version of AIM accessible through the Safari browser on the iPhone and iPod Touch. In 2011, AOL launched an overhaul of their Instant Messaging service. Included in the update was a brand new iOS application for iPhone and iPod Touch that incorporated all the latest features. A brand new icon was used for the application, featuring the new cursive logo for AIM. The user-interface was entirely redone for the features including: a new buddy list, group messaging, in-line photos and videos, as well as improved file-sharing. Version 5.0.5, updated in March 2012, it supported more social stream features, much like Facebook and Twitter, as well as the ability to send voice messages up to 60 seconds long. iPad application On April 3, 2010, Apple released the first generation iPad. Along with this newly released device AOL released the AIM application for iPad. It was built entirely from scratch for the new version iOS with a specialized user-interface for the device. It supports geo location, Facebook status updates and chat, Myspace, Twitter, YouTube, Foursquare and many social networking platforms. AIM Express AIM Express ran in a pop-up browser window. It was intended for use by people who are unwilling or unable to install a standalone application or those at computers that lack the AIM application. AIM Express supported many of the standard features included in the stand-alone client, but did not provide advanced features like file transfer, audio chat, video conferencing, or buddy info. It was implemented in Adobe Flash. It was an upgrade to the prior AOL Quick Buddy, which was later available for older systems that cannot handle Express before being discontinued. Express and Quick Buddy were similar to MSN Web Messenger and Yahoo! Web Messenger. This web version evolved into AIM.com's web-based messenger. AIM Pages AIM Pages was a free website released in May 2006 by AOL in replacement of AIMSpace. Anyone who had an AIM user name and was at least 16 years of age could create their own web page (to display an online, dynamic profile) and share it with buddies from their AIM Buddy list. Layout AIM Pages included links to the email and Instant Message of the owner, along with a section listing the owners "buddies", which included AIM user names. It was possible to create modules in a Module T microformat. Video hosting sites like Netflix and YouTube could be added to ones AIM Page, as well as other sites like Amazon.com. It was also possible to insert HTML code. The main focus of AIM Pages was the integration of external modules, like those listed above, into the AOL Instant Messenger experience. Discontinuation By late 2007, AIM Pages had been discontinued. After AIM Pages shutdown, links to AIM Pages were redirected to AOL Lifestream, AOL's new site aimed at collecting external modules in one place, independent of AIM buddies. AOL Lifestream was shut down February 24, 2017. AIM for Mac AOL released an all-new AIM for the Mac on September 29, 2008, and the final build on December 15, 2008. The redesigned AIM for Mac is a full universal binary Cocoa API application that supports both Tiger and Leopard — Mac OS X 10.4.8 (and above) or Mac OS X 10.5.3 (and above). On October 1, 2009, AOL released AIM 2.0 for Mac. AIM real-time IM This feature is available for AIM 7 and allows for a user to see what the other is typing as it is being done. It was developed and built with assistance from Trace Research and Development Centre at University of Wisconsin–Madison and Gallaudet University. The application provides visually impaired users the ability to convert messages from text (words) to speech. For the application to work users must have AIM 6.8 or higher, as it is not compatible with older versions of AIM software, AIM for Mac or iChat. AIM to mobile (messaging to phone numbers) This feature allows text messaging to a phone number (text messaging is less functional than instant messaging). Discontinued features AIM Phoneline AIM Phoneline was a Voice over IP PC-PC, PC-Phone and Phone-to-PC service provided via the AIM application. It was also known to work with Apple's iChat Client. The service was officially closed to its customers on January 13, 2009. The closing of the free service caused the number associated with the service to be disabled and not transferable for a different service. AIM Phoneline website was recommending users switch to a new service named AIM Call Out, also discontinued now. Launched on May 16, 2006, AIM Phoneline provided users the ability to have several local numbers, allowing AIM users to receive free incoming calls. The service allowed users to make calls to landlines and mobile devices through the use of a computer. The service, however, was only free for receiving and AOL charged users $14.95 a month for an unlimited calling plan. In order to use AIM Phoneline users had to install the latest free version of AIM Triton software and needed a good set of headphones with a boom microphone. It could take several days after a user signed up before it started working. AIM Call Out AIM Call Out is a discontinued Voice over IP PC-PC, PC-Phone and Phone-to-PC service provided by AOL via its AIM application that replaced the defunct AIM Phoneline service in November 2007. It did not depend on the AIM client and could be used with only an AIM screenname via the WebConnect feature or a dedicated SIP device. The AIM Call Out service was shut down on March 25, 2009. Security On November 4, 2014, AIM scored one out of seven points on the Electronic Frontier Foundation's secure messaging scorecard. AIM received a point for encryption during transit, but lost points because communications are not encrypted with a key to which the provider has no access, i.e., the communications are not end-to-end encrypted, users can't verify contacts' identities, past messages are not secure if the encryption keys are stolen, (i.e., the service does not provide forward secrecy), the code is not open to independent review, i.e., the code is not open-source), the security design is not properly documented, and there has not been a recent independent security audit. BlackBerry Messenger, Ebuddy XMS, Hushmail, Kik Messenger, Skype, Viber, and Yahoo! Messenger also scored one out of seven points. See also Comparison of cross-platform instant messaging clients List of defunct instant messaging platforms References External links 1997 software Android (operating system) software Instant Messenger BlackBerry software Classic Mac OS instant messaging clients Computer-related introductions in 1997 Cross-platform software Defunct instant messaging clients Instant messaging clients Internet properties disestablished in 2017 IOS software MacOS instant messaging clients Online chat Symbian software Unix instant messaging clients Videotelephony Windows instant messaging clients
2925
https://en.wikipedia.org/wiki/Ackermann%20function
Ackermann function
In computability theory, the Ackermann function, named after Wilhelm Ackermann, is one of the simplest and earliest-discovered examples of a total computable function that is not primitive recursive. All primitive recursive functions are total and computable, but the Ackermann function illustrates that not all total computable functions are primitive recursive. After Ackermann's publication of his function (which had three non-negative integer arguments), many authors modified it to suit various purposes, so that today "the Ackermann function" may refer to any of numerous variants of the original function. One common version is the two-argument Ackermann–Péter function developed by Rózsa Péter and Raphael Robinson. Its value grows very rapidly; for example, results in , an integer of 19,729 decimal digits. History In the late 1920s, the mathematicians Gabriel Sudan and Wilhelm Ackermann, students of David Hilbert, were studying the foundations of computation. Both Sudan and Ackermann are credited with discovering total computable functions (termed simply "recursive" in some references) that are not primitive recursive. Sudan published the lesser-known Sudan function, then shortly afterwards and independently, in 1928, Ackermann published his function (the Greek letter phi). Ackermann's three-argument function, , is defined such that for , it reproduces the basic operations of addition, multiplication, and exponentiation as and for p > 2 it extends these basic operations in a way that can be compared to the hyperoperations: (Aside from its historic role as a total-computable-but-not-primitive-recursive function, Ackermann's original function is seen to extend the basic arithmetic operations beyond exponentiation, although not as seamlessly as do variants of Ackermann's function that are specifically designed for that purpose—such as Goodstein's hyperoperation sequence.) In On the Infinite, David Hilbert hypothesized that the Ackermann function was not primitive recursive, but it was Ackermann, Hilbert's personal secretary and former student, who actually proved the hypothesis in his paper On Hilbert's Construction of the Real Numbers. Rózsa Péter and Raphael Robinson later developed a two-variable version of the Ackermann function that became preferred by almost all authors. The generalized hyperoperation sequence, e.g. , is a version of Ackermann function as well. In 1963 R.C. Buck based an intuitive two-variable variant on the hyperoperation sequence: Compared to most other versions Buck's function has no unessential offsets: Many other versions of Ackermann function have been investigated. Definition Definition: as m-ary function Ackermann's original three-argument function is defined recursively as follows for nonnegative integers and : Of the various two-argument versions, the one developed by Péter and Robinson (called "the" Ackermann function by most authors) is defined for nonnegative integers and as follows: The Ackermann function has also been expressed in relation to the hyperoperation sequence: or, written in Knuth's up-arrow notation (extended to integer indices ): or, equivalently, in terms of Buck's function F: Definition: as iterated 1-ary function Define as the n-th iterate of : Iteration is the process of composing a function with itself a certain number of times. Function composition is an associative operation, so . Conceiving the Ackermann function as a sequence of unary functions, one can set . The function then becomes a sequence of unary functions, defined from iteration: Computation The recursive definition of the Ackermann function can naturally be transposed to a term rewriting system (TRS). TRS, based on 2-ary function The definition of the 2-ary Ackermann function leads to the obvious reduction rules Example Compute The reduction sequence is To compute one can use a stack, which initially contains the elements . Then repeatedly the two top elements are replaced according to the rules Schematically, starting from : WHILE stackLength <> 1 { POP 2 elements; PUSH 1 or 2 or 3 elements, applying the rules r1, r2, r3 } The pseudocode is published in . For example, on input , Remarks The leftmost-innermost strategy is implemented in 225 computer languages on Rosetta Code. For all the computation of takes no more than steps. pointed out that in the computation of the maximum length of the stack is , as long as . Their own algorithm, inherently iterative, computes within time and within space. TRS, based on iterated 1-ary function The definition of the iterated 1-ary Ackermann functions leads to different reduction rules As function composition is associative, instead of rule r6 one can define Like in the previous section the computation of can be implemented with a stack. Initially the stack contains the three elements . Then repeatedly the three top elements are replaced according to the rules Schematically, starting from : WHILE stackLength <> 1 { POP 3 elements; PUSH 1 or 3 or 5 elements, applying the rules r4, r5, r6; } Example On input the successive stack configurations are The corresponding equalities are When reduction rule r7 is used instead of rule r6, the replacements in the stack will follow The successive stack configurations will then be The corresponding equalities are Remarks On any given input the TRSs presented so far converge in the same number of steps. They also use the same reduction rules (in this comparison the rules r1, r2, r3 are considered "the same as" the rules r4, r5, r6/r7 respectively). For example, the reduction of converges in 14 steps: 6 × r1, 3 × r2, 5 × r3. The reduction of converges in the same 14 steps: 6 × r4, 3 × r5, 5 × r6/r7. The TRSs differ in the order in which the reduction rules are applied. When is computed following the rules {r4, r5, r6}, the maximum length of the stack stays below . When reduction rule r7 is used instead of rule r6, the maximum length of the stack is only . The length of the stack reflects the recursion depth. As the reduction according to the rules {r4, r5, r7} involves a smaller maximum depth of recursion, this computation is more efficient in that respect. TRS, based on hyperoperators As — or — showed explicitly, the Ackermann function can be expressed in terms of the hyperoperation sequence: or, after removal of the constant 2 from the parameter list, in terms of Buck's function Buck's function , a variant of Ackermann function by itself, can be computed with the following reduction rules: Instead of rule b6 one can define the rule To compute the Ackermann function it suffices to add three reduction rules These rules take care of the base case A(0,n), the alignment (n+3) and the fudge (-3). Example Compute The matching equalities are when the TRS with the reduction rule is applied: when the TRS with the reduction rule is applied: Remarks The computation of according to the rules {b1 - b5, b6, r8 - r10} is deeply recursive. The maximum depth of nested s is . The culprit is the order in which iteration is executed: . The first disappears only after the whole sequence is unfolded. The computation according to the rules {b1 - b5, b7, r8 - r10} is more efficient in that respect. The iteration simulates the repeated loop over a block of code. The nesting is limited to , one recursion level per iterated function. showed this correspondence. These considerations concern the recursion depth only. Either way of iterating leads to the same number of reduction steps, involving the same rules (when the rules b6 and b7 are considered "the same"). The reduction of for instance converges in 35 steps: 12 × b1, 4 × b2, 1 × b3, 4 × b5, 12 × b6/b7, 1 × r9, 1 × r10. The modus iterandi only affects the order in which the reduction rules are applied. A real gain of execution time can only be achieved by not recalculating subresults over and over again. Memoization is an optimization technique where the results of function calls are cached and returned when the same inputs occur again. See for instance . published a cunning algorithm which computes within time and within space. Huge numbers To demonstrate how the computation of results in many steps and in a large number: Table of values Computing the Ackermann function can be restated in terms of an infinite table. First, place the natural numbers along the top row. To determine a number in the table, take the number immediately to the left. Then use that number to look up the required number in the column given by that number and one row up. If there is no number to its left, simply look at the column headed "1" in the previous row. Here is a small upper-left portion of the table: The numbers here which are only expressed with recursive exponentiation or Knuth arrows are very large and would take up too much space to notate in plain decimal digits. Despite the large values occurring in this early section of the table, some even larger numbers have been defined, such as Graham's number, which cannot be written with any small number of Knuth arrows. This number is constructed with a technique similar to applying the Ackermann function to itself recursively. This is a repeat of the above table, but with the values replaced by the relevant expression from the function definition to show the pattern clearly: Properties General remarks It may not be immediately obvious that the evaluation of always terminates. However, the recursion is bounded because in each recursive application either decreases, or remains the same and decreases. Each time that reaches zero, decreases, so eventually reaches zero as well. (Expressed more technically, in each case the pair decreases in the lexicographic order on pairs, which is a well-ordering, just like the ordering of single non-negative integers; this means one cannot go down in the ordering infinitely many times in succession.) However, when decreases there is no upper bound on how much can increase — and it will often increase greatly. For small values of m like 1, 2, or 3, the Ackermann function grows relatively slowly with respect to n (at most exponentially). For , however, it grows much more quickly; even is about 2.00353, and the decimal expansion of is very large by any typical measure, about 2.12004. An interesting aspect is that the only arithmetic operation it ever uses is addition of 1. Its fast growing power is based solely on nested recursion. This also implies that its running time is at least proportional to its output, and so is also extremely huge. In actuality, for most cases the running time is far larger than the output; see above. A single-argument version that increases both and at the same time dwarfs every primitive recursive function, including very fast-growing functions such as the exponential function, the factorial function, multi- and superfactorial functions, and even functions defined using Knuth's up-arrow notation (except when the indexed up-arrow is used). It can be seen that is roughly comparable to in the fast-growing hierarchy. This extreme growth can be exploited to show that which is obviously computable on a machine with infinite memory such as a Turing machine and so is a computable function, grows faster than any primitive recursive function and is therefore not primitive recursive. Not primitive recursive The Ackermann function grows faster than any primitive recursive function and therefore is not itself primitive recursive. The sketch of the proof is this: a primitive recursive function defined using up to k recursions must grow slower than , the (k+1)-th function in the fast-growing hierarchy, but the Ackermann function grows at least as fast as . Specifically, one shows that to every primitive recursive function there exists a non-negative integer such that for all non-negative integers , Once this is established, it follows that itself is not primitive recursive, since otherwise putting would lead to the contradiction The proof proceeds as follows: define the class of all functions that grow slower than the Ackermann function and show that contains all primitive recursive functions. The latter is achieved by showing that contains the constant functions, the successor function, the projection functions and that it is closed under the operations of function composition and primitive recursion. Inverse Since the function considered above grows very rapidly, its inverse function, f, grows very slowly. This inverse Ackermann function f−1 is usually denoted by α. In fact, α(n) is less than 5 for any practical input size n, since is on the order of . This inverse appears in the time complexity of some algorithms, such as the disjoint-set data structure and Chazelle's algorithm for minimum spanning trees. Sometimes Ackermann's original function or other variations are used in these settings, but they all grow at similarly high rates. In particular, some modified functions simplify the expression by eliminating the −3 and similar terms. A two-parameter variation of the inverse Ackermann function can be defined as follows, where is the floor function: This function arises in more precise analyses of the algorithms mentioned above, and gives a more refined time bound. In the disjoint-set data structure, m represents the number of operations while n represents the number of elements; in the minimum spanning tree algorithm, m represents the number of edges while n represents the number of vertices. Several slightly different definitions of exist; for example, is sometimes replaced by n, and the floor function is sometimes replaced by a ceiling. Other studies might define an inverse function of one where m is set to a constant, such that the inverse applies to a particular row. The inverse of the Ackermann function is primitive recursive. Use as benchmark The Ackermann function, due to its definition in terms of extremely deep recursion, can be used as a benchmark of a compiler's ability to optimize recursion. The first published use of Ackermann's function in this way was in 1970 by Dragoș Vaida and, almost simultaneously, in 1971, by Yngve Sundblad. Sundblad's seminal paper was taken up by Brian Wichmann (co-author of the Whetstone benchmark) in a trilogy of papers written between 1975 and 1982. See also Computability theory Double recursion Fast-growing hierarchy Goodstein function Primitive recursive function Recursion (computer science) Notes References Bibliography External links An animated Ackermann function calculator Ackerman function implemented using a for loop Ackermann functions. Includes a table of some values. describes several variations on the definition of A. The Ackermann function written in different programming languages, (on Rosetta Code) ) Some study and programming. Arithmetic Large integers Special functions Theory of computation Computability theory
2928
https://en.wikipedia.org/wiki/Association%20for%20Computing%20Machinery
Association for Computing Machinery
The Association for Computing Machinery (ACM) is a US-based international learned society for computing. It was founded in 1947 and is the world's largest scientific and educational computing society. The ACM is a non-profit professional membership group, reporting nearly 110,000 student and professional members . Its headquarters are in New York City. The ACM is an umbrella organization for academic and scholarly interests in computer science (informatics). Its motto is "Advancing Computing as a Science & Profession". History In 1947, a notice was sent to various people: On January 10, 1947, at the Symposium on Large-Scale Digital Calculating Machinery at the Harvard computation Laboratory, Professor Samuel H. Caldwell of Massachusetts Institute of Technology spoke of the need for an association of those interested in computing machinery, and of the need for communication between them. [...] After making some inquiries during May and June, we believe there is ample interest to start an informal association of many of those interested in the new machinery for computing and reasoning. Since there has to be a beginning, we are acting as a temporary committee to start such an association: E. C. Berkeley, Prudential Insurance Co. of America, Newark, N. J. R. V. D. Campbell, Raytheon Manufacturing Co., Waltham, Mass. , Bureau of Standards, Washington, D.C. H. E. Goheen, Office of Naval Research, Boston, Mass. J. W. Mauchly, Electronic Control Co., Philadelphia, Pa. T. K. Sharpless, Moore School of Elec. Eng., Philadelphia, Pa. R. Taylor, Mass. Inst. of Tech., Cambridge, Mass. C. B. Tompkins, Engineering Research Associates, Washington, D.C. The committee (except for Curtiss) had gained experience with computers during World War II: Berkeley, Campbell, and Goheen helped build Harvard Mark I under Howard H. Aiken, Mauchly and Sharpless were involved in building ENIAC, Tompkins had used "the secret Navy code-breaking machines", and Taylor had worked on Bush's Differential analyzers. The ACM was then founded in 1947 under the name Eastern Association for Computing Machinery, which was changed the following year to the Association for Computing Machinery. The ACM History Committee since 2016 has published the A.M.Turing Oral History project, the ACM Key Award Winners Video Series, and the India Industry Leaders Video project. Activities ACM is organized into over 246 local professional chapters and 38 Special Interest Groups (SIGs), through which it conducts most of its activities. Additionally, there are over 833 college and university chapters. The first student chapter was founded in 1961 at the University of Louisiana at Lafayette. Many of the SIGs, such as SIGGRAPH, SIGDA, SIGPLAN, SIGCSE and SIGCOMM, sponsor regular conferences, which have become famous as the dominant venue for presenting innovations in certain fields. The groups also publish a large number of specialized journals, magazines, and newsletters. ACM also sponsors other computer science related events such as the worldwide ACM International Collegiate Programming Contest (ICPC), and has sponsored some other events such as the chess match between Garry Kasparov and the IBM Deep Blue computer. Services Publications ACM publishes over 50 journals including the prestigious Journal of the ACM, and two general magazines for computer professionals, Communications of the ACM (also known as Communications or CACM) and Queue. Other publications of the ACM include: ACM XRDS, formerly "Crossroads", was redesigned in 2010 and is the most popular student computing magazine in the US. ACM Interactions, an interdisciplinary HCI publication focused on the connections between experiences, people and technology, and the third largest ACM publication. ACM Computing Surveys (CSUR) Computers in Entertainment (CIE) ACM Journal on Emerging Technologies in Computing Systems (JETC) ACM Special Interest Group: Computers and Society (SIGCAS) A number of journals, specific to subfields of computer science, titled ACM Transactions. Some of the more notable transactions include: ACM Transactions on Algorithms (TALG) ACM Transactions on Embedded Computing Systems (TECS) ACM Transactions on Computer Systems (TOCS) IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB) ACM Transactions on Computational Logic (TOCL) ACM Transactions on Computer-Human Interaction (TOCHI) ACM Transactions on Database Systems (TODS) ACM Transactions on Graphics (TOG) ACM Transactions on Mathematical Software (TOMS) ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) IEEE/ACM Transactions on Networking (TON) ACM Transactions on Programming Languages and Systems (TOPLAS) Although Communications no longer publishes primary research, and is not considered a prestigious venue, many of the great debates and results in computing history have been published in its pages. ACM has made almost all of its publications available to paid subscribers online at its Digital Library and also has a Guide to Computing Literature. ACM also offers insurance, online courses, and other services to its members. In 1997, ACM Press published Wizards and Their Wonders: Portraits in Computing (), written by Christopher Morgan, with new photographs by Louis Fabian Bachrach. The book is a collection of historic and current portrait photographs of figures from the computer industry. Portal and Digital Library The ACM Portal is an online service of the ACM. Its core are two main sections: ACM Digital Library and the ACM Guide to Computing Literature. The ACM Digital Library was launched in October 1997. It is the full-text collection of all articles published by the ACM in its articles, magazines and conference proceedings. The Guide is a bibliography in computing with over one million entries. The ACM Digital Library contains a comprehensive archive starting in the 1950s of the organization's journals, magazines, newsletters and conference proceedings. Online services include a forum called Ubiquity and Tech News digest. There is an extensive underlying bibliographic database containing key works of all genres from all major publishers of computing literature. This secondary database is a rich discovery service known as The ACM Guide to Computing Literature. ACM adopted a hybrid Open Access (OA) publishing model in 2013. Authors who do not choose to pay the OA fee must grant ACM publishing rights by either a copyright transfer agreement or a publishing license agreement. ACM was a "green" publisher before the term was invented. Authors may post documents on their own websites and in their institutional repositories with a link back to the ACM Digital Library's permanently maintained Version of Record. All metadata in the Digital Library is open to the world, including abstracts, linked references and citing works, citation and usage statistics, as well as all functionality and services. Other than the free articles, the full-texts are accessed by subscription. There is also a mounting challenge to the ACM's publication practices coming from the open access movement. Some authors see a subscription business model as less relevant and publish on their home pages or on unreviewed sites like arXiv. Other organizations have sprung up which do their peer review entirely free and online, such as Journal of Artificial Intelligence Research, Journal of Machine Learning Research and the Journal of Research and Practice in Information Technology. ACM has made its publications from 1951 to 2000 open access through its digital library on 7 April 2022 as part of its 75th anniversary of the organisation. Membership grades In addition to student and regular members, ACM has several advanced membership grades to recognize those with multiple years of membership and "demonstrated performance that sets them apart from their peers". The number of Fellows, Distinguished Members, and Senior Members cannot exceed 1%, 10%, and 25% of the total number of professional members, respectively. Fellows The ACM Fellows Program was established by Council of the Association for Computing Machinery in 1993 "to recognize and honor outstanding ACM members for their achievements in computer science and information technology and for their significant contributions to the mission of the ACM." There are 1,310 Fellows out of about 100,000 members. Distinguished Members In 2006, ACM began recognizing two additional membership grades, one which was called Distinguished Members. Distinguished Members (Distinguished Engineers, Distinguished Scientists, and Distinguished Educators) have at least 15 years of professional experience and 5 years of continuous ACM membership and "have made a significant impact on the computing field". In 2006 when the Distinguished Members first came out, one of the three levels was called "Distinguished Member" and was changed about two years later to "Distinguished Educator". Those who already had the Distinguished Member title had their titles changed to one of the other three titles. List of Distinguished Members of the Association for Computing Machinery Senior Members Also in 2006, ACM began recognizing Senior Members. According to the ACM, "The Senior Members Grade recognizes those ACM members with at least 10 years of professional experience and 5 years of continuous Professional Membership who have demonstrated performance through technical leadership, and technical or professional contributions". Senior membership also requires 3 letters of reference Distinguished Speakers While not technically a membership grade, the ACM recognizes distinguished speakers on topics in computer science. A distinguished speaker is appointed for a three-year period. There are usually about 125 current distinguished speakers. The ACM website describes these people as 'Renowned International Thought Leaders'. The distinguished speakers program (DSP) has been in existence for over 20 years and serves as an outreach program that brings renowned experts from Academia, Industry and Government to present on the topic of their expertise. The DSP is overseen by a committee Chapters ACM has three kinds of chapters: Special Interest Groups, Professional Chapters, and Student Chapters. , ACM has professional & SIG Chapters in 56 countries. , there exist ACM student chapters in 41 countries. Special Interest Groups SIGACCESS: Accessible Computing SIGACT: Algorithms and Computation Theory SIGAda: Ada Programming Language SIGAI: Artificial Intelligence SIGAPP: Applied Computing SIGARCH: Computer Architecture SIGBED: Embedded Systems SIGBio: Bioinformatics SIGCAS: Computers and Society SIGCHI: Computer–Human Interaction SIGCOMM: Data Communication SIGCSE: Computer Science Education SIGDA: Design Automation SIGDOC: Design of Communication SIGecom: Electronic Commerce SIGEVO: Genetic and Evolutionary Computation SIGGRAPH: Computer Graphics and Interactive Techniques SIGHPC: High Performance Computing SIGIR: Information Retrieval SIGITE: Information Technology Education SIGKDD: Knowledge Discovery and Data Mining SIGLOG: Logic and Computation SIGMETRICS: Measurement and Evaluation SIGMICRO: Microarchitecture SIGMIS: Management Information Systems SIGMM: Multimedia SIGMOBILE: Mobility of Systems, Users, Data and Computing SIGMOD: Management of Data SIGOPS: Operating Systems SIGPLAN: Programming Languages SIGSAC: Security, Audit, and Control SIGSAM: Symbolic and Algebraic Manipulation SIGSIM: Simulation and Modeling SIGSOFT: Software Engineering SIGSPATIAL: Spatial Information SIGUCCS: University and College Computing Services SIGWEB: Hypertext, Hypermedia, and Web Conferences ACM and its Special Interest Groups (SIGs) sponsors numerous conferences with 170 hosted worldwide in 2017. ACM Conferences page has an up-to-date complete list while a partial list is shown below. Most of the SIGs also have an annual conference. ACM conferences are often very popular publishing venues and are therefore very competitive. For example, SIGGRAPH 2007 attracted about 30000 attendees, while CIKM 2005 and RecSys 2022 had paper acceptance rates of only accepted 15% and 17% respectively. AIES: Conference on AI, Ethics, and Society ASPLOS: International Conference on Architectural Support for Programming Languages and Operating Systems CHI: Conference on Human Factors in Computing Systems CIKM: Conference on Information and Knowledge Management COMPASS: International Conference on Computing and Sustainable Societies DAC: Design Automation Conference DEBS: Distributed Event Based Systems FAccT: Conference on Fairness, Accountability, and Transparency FCRC: Federated Computing Research Conference GECCO: Genetic and Evolutionary Computation Conference HT: Hypertext: Conference on Hypertext and Hypermedia JCDL: Joint Conference on Digital Libraries MobiHoc: International Symposium on Mobile Ad Hoc Networking and Computing SC: Supercomputing Conference SIGCOMM: ACM SIGCOMM Conference SIGCSE: SIGCSE Technical Symposium on Computer Science Education SIGGRAPH: International Conference on Computer Graphics and Interactive Techniques RecSys: ACM Conference on Recommender Systems TAPIA: Richard Tapia Celebration of Diversity in Computing Conference The ACM is a co–presenter and founding partner of the Grace Hopper Celebration of Women in Computing (GHC) with the Anita Borg Institute for Women and Technology. Some conferences are hosted by ACM student branches; this includes Reflections Projections, which is hosted by UIUC ACM. In addition, ACM sponsors regional conferences. Regional conferences facilitate increased opportunities for collaboration between nearby institutions and they are well attended. For additional non-ACM conferences, see this list of computer science conferences. Awards The ACM presents or co–presents a number of awards for outstanding technical and professional achievements and contributions in computer science and information technology. ACM A. M. Turing Award ACM – AAAI Allen Newell Award ACM Athena Lecturer Award ACM/CSTA Cutler-Bell Prize in High School Computing ACM Distinguished Service Award ACM Doctoral Dissertation Award ACM Eugene L. Lawler Award ACM Fellowship, awarded annually since 1993 ACM Gordon Bell Prize ACM Grace Murray Hopper Award ACM – IEEE CS George Michael Memorial HPC Fellowships ACM – IEEE CS Ken Kennedy Award ACM – IEEE Eckert-Mauchly Award ACM India Doctoral Dissertation Award ACM Karl V. Karlstrom Outstanding Educator Award ACM Paris Kanellakis Theory and Practice Award ACM Policy Award ACM Presidential Award ACM Prize in Computing (formerly: ACM – Infosys Foundation Award in the Computing Sciences) ACM Programming Systems and Languages Paper Award ACM Student Research Competition ACM Software System Award International Science and Engineering Fair Outstanding Contribution to ACM Award SIAM/ACM Prize in Computational Science and Engineering Over 30 of ACM's Special Interest Groups also award individuals for their contributions with a few listed below. ACM Alan D. Berenbaum Distinguished Service Award ACM Maurice Wilkes Award ISCA Influential Paper Award Leadership The President of ACM for 2022–2024 is Yannis Ioannidis, Professor at the National and Kapodistrian University of Athens. He is successor of Gabriele Kotsis (2020–2022), Professor at the Johannes Kepler University Linz; Cherri M. Pancake (2018–2020), Professor Emeritus at Oregon State University and Director of the Northwest Alliance for Computational Science and Engineering (NACSE); Vicki L. Hanson (2016–2018), Distinguished Professor at the Rochester Institute of Technology and visiting professor at the University of Dundee; Alexander L. Wolf (2014–2016), Dean of the Jack Baskin School of Engineering at the University of California, Santa Cruz; Vint Cerf (2012–2014), American computer scientist and Internet pioneer; Alain Chesnais (2010–2012); and Dame Wendy Hall of the University of Southampton, UK (2008–2010). ACM is led by a council consisting of the president, vice-president, treasurer, past president, SIG Governing Board Chair, Publications Board Chair, three representatives of the SIG Governing Board, and seven Members-At-Large. This institution is often referred to simply as "Council" in Communications of the ACM. Infrastructure ACM has numerous boards, committees, and task forces which run the organization: ACM Council ACM Executive Committee Digital Library Board Education Board Practitioner Board Publications Board SIG Governing Board DEI Council ACM Technology Policy Council ACM Representatives to Other Organizations Computer Science Teachers Association ACM Council on Women in Computing ACM-W, the ACM council on women in computing, supports, celebrates, and advocates internationally for the full engagement of women in computing. ACM–W's main programs are regional celebrations of women in computing, ACM-W chapters, and scholarships for women CS students to attend research conferences. In India and Europe these activities are overseen by ACM-W India and ACM-W Europe respectively. ACM-W collaborates with organizations such as the Anita Borg Institute, the National Center for Women & Information Technology (NCWIT), and Committee on the Status of Women in Computing Research (CRA-W). The ACM-W gives an annual Athena Lecturer Award to honor outstanding women researchers who have made fundamental contributions to computer science. This program began in 2006. Speakers are nominated by SIG officers. Partner organizations ACM's primary partner has been the IEEE Computer Society (IEEE-CS), which is the largest subgroup of the Institute of Electrical and Electronics Engineers (IEEE). The IEEE focuses more on hardware and standardization issues than theoretical computer science, but there is considerable overlap with ACM's agenda. They have many joint activities including conferences, publications and awards. ACM and its SIGs co-sponsor about 20 conferences each year with IEEE-CS and other parts of IEEE. Eckert-Mauchly Award and Ken Kennedy Award, both major awards in computer science, are given jointly by ACM and the IEEE-CS. They occasionally cooperate on projects like developing computing curricula. ACM has also jointly sponsored on events with other professional organizations like the Society for Industrial and Applied Mathematics (SIAM). Criticism In December 2019, the ACM co-signed a letter with over one hundred other publishers to President Donald Trump saying that an open access mandate would increase costs to taxpayers or researchers and hurt intellectual property. This was in response to rumors that he was considering issuing an executive order that would require federally funded research be made freely available online immediately after being published. It is unclear how these rumors started. Many ACM members opposed the letter, leading ACM to issue a statement clarifying that they remained committed to open access, and they wanted to see communication with stakeholders about the potential mandate. The statement did not significantly assuage criticism from ACM members. The SoCG conference, while originally an ACM conference, parted ways with ACM in 2014 because of problems when organizing conferences abroad. See also ACM Classification Scheme Franz Alt, former president Edmund Berkeley, co-founder Computer science Computing Bernard Galler, former president Fellows of the ACM (by year) Fellows of the ACM (category) Grace Murray Hopper Award Presidents of the Association for Computing Machinery Timeline of computing hardware before 1950 Turing Award List of academic databases and search engines References External links ACM portal for publications ACM Digital Library Association for Computing Machinery Records, 1947-2009, Charles Babbage Institute, University of Minnesota. ACM Upsilon Phi Epsilon honor society 1947 establishments in the United States Computer science-related professional associations International learned societies Organizations established in 1947 501(c)(3) organizations
2940
https://en.wikipedia.org/wiki/And%20did%20those%20feet%20in%20ancient%20time
And did those feet in ancient time
"And did those feet in ancient time" is a poem by William Blake from the preface to his epic Milton: A Poem in Two Books, one of a collection of writings known as the Prophetic Books. The date of 1804 on the title page is probably when the plates were begun, but the poem was printed . Today it is best known as the hymn "Jerusalem", with music written by Sir Hubert Parry in 1916. The famous orchestration was written by Sir Edward Elgar. It is not to be confused with another poem, much longer and larger in scope and also by Blake, called Jerusalem The Emanation of the Giant Albion. It is often assumed that the poem was inspired by the apocryphal story that a young Jesus, accompanied by Joseph of Arimathea, a tin merchant, travelled to what is now England and visited Glastonbury during his unknown years. However, according to British folklore scholar A. W. Smith, "there was little reason to believe that an oral tradition concerning a visit made by Jesus to Britain existed before the early part of the twentieth century". Instead, the poem draws on an older story, repeated in Milton's History of Britain, that Joseph of Arimathea, alone, travelled to preach to the ancient Britons after the death of Jesus. The poem's theme is linked to the Book of Revelation (3:12 and 21:2) describing a Second Coming, wherein Jesus establishes a New Jerusalem. Churches in general, and the Church of England in particular, have long used Jerusalem as a metaphor for Heaven, a place of universal love and peace. In the most common interpretation of the poem, Blake asks whether a visit by Jesus briefly created heaven in England, in contrast to the "dark Satanic Mills" of the Industrial Revolution. Blake's poem asks four questions rather than asserting the historical truth of Christ's visit. The second verse is interpreted as an exhortation to create an ideal society in England, whether or not there was a divine visit. Text The original text is found in the preface Blake wrote for inclusion with Milton, a Poem, following the lines beginning "The Stolen and Perverted Writings of Homer & Ovid: of Plato & Cicero, which all Men ought to contemn: ..." Blake's poem Beneath the poem Blake inscribed a quotation from the Bible: "Dark Satanic Mills" The phrase "dark Satanic Mills", which entered the English language from this poem, is often interpreted as referring to the early Industrial Revolution and its destruction of nature and human relationships. That view has been linked to the fate of the Albion Flour Mills in Southwark, the first major factory in London. The rotary steam-powered flour mill, built by Matthew Boulton, assisted by James Watt, could produce 6,000 bushels of flour per week. The factory could have driven independent traditional millers out of business, but it was destroyed in 1791 by fire. There were rumours of arson, but the most likely cause was a bearing that overheated due to poor maintenance. London's independent millers celebrated, with placards reading, "Success to the mills of Albion but no Albion Mills." Opponents referred to the factory as satanic, and accused its owners of adulterating flour and using cheap imports at the expense of British producers. A contemporary illustration of the fire shows a devil squatting on the building. The mill was a short distance from Blake's home. Blake's phrase resonates with a broader theme in his works; what he envisioned as a physically and spiritually repressive ideology based on a quantified reality. Blake saw the cotton mills and collieries of the period as a mechanism for the enslavement of millions, but the concepts underpinning the works had a wider application: Another interpretation is that the phrase refers to the established Church of England, which, in contrast to Blake, preached a doctrine of conformity to the established social order and class system. Stonehenge and other megaliths are featured in Milton, suggesting they may relate to the oppressive power of priestcraft in general. Peter Porter observed that many scholars argue that the "[mills] are churches and not the factories of the Industrial Revolution everyone else takes them for". In 2007, the Bishop of Durham, N. T. Wright, explicitly recognised that element of English subculture when he acknowledged the view that "dark satanic mills" could refer to the "great churches". In similar vein, the critic F. W. Bateson noted how "the adoption by the Churches and women's organizations of this anti-clerical paean of free love is amusing evidence of the carelessness with which poetry is read". An alternative theory is that Blake is referring to a mystical concept within his own mythology, related to the ancient history of England. Satan's "mills" are referred to repeatedly in the main poem, and are first described in words which suggest neither industrialism nor ancient megaliths, but rather something more abstract: "the starry Mills of Satan/ Are built beneath the earth and waters of the Mundane Shell...To Mortals thy Mills seem everything, and the Harrow of Shaddai / A scheme of human conduct invisible and incomprehensible". "Chariots of fire" The line from the poem "Bring me my Chariot of fire!" draws on the story of 2 Kings 2:11, where the Old Testament prophet Elijah is taken directly to heaven: "And it came to pass, as they still went on, and talked, that, behold, there appeared a chariot of fire, and horses of fire, and parted them both asunder; and Elijah went up by a whirlwind into heaven." The phrase has become a byword for divine energy, and inspired the title of the 1981 film Chariots of Fire, in which the hymn "Jerusalem" is sung during the final scenes. The plural phrase "chariots of fire" refers to 2 Kings 6:17. "Green and pleasant land" Blake lived in London for most of his life, but wrote much of Milton while living in a cottage, now Blake’s Cottage, in the village of Felpham in Sussex. Amanda Gilroy argues that the poem is informed by Blake's "evident pleasure" in the Felpham countryside. However, local people say that records from Lavant, near Chichester, state that Blake wrote "And did those feet in ancient time" in an east-facing alcove of the Earl of March public house. The phrase "green and pleasant land" has become a common term for an identifiably English landscape or society. It appears as a headline, title or sub-title in numerous articles and books. Sometimes it refers, whether with appreciation, nostalgia or critical analysis, to idyllic or enigmatic aspects of the English countryside. In other contexts it can suggest the perceived habits and aspirations of rural middle-class life. Sometimes it is used ironically, e.g. in the Dire Straits song "Iron Hand". Revolution Several of Blake's poems and paintings express a notion of universal humanity: "As all men are alike (tho' infinitely various)". He retained an active interest in social and political events for all his life, but was often forced to resort to cloaking social idealism and political statements in Protestant mystical allegory. Even though the poem was written during the Napoleonic Wars, Blake was an outspoken supporter of the French Revolution, and Napoleon claimed to be continuing this revolution. The poem expressed his desire for radical change without overt sedition. In 1803 Blake was charged at Chichester with high treason for having "uttered seditious and treasonable expressions", but was acquitted. The trial was not a direct result of anything he had written, but comments he had made in conversation, including "Damn the King!". The poem is followed in the preface by a quotation from Numbers 11:29: "Would to God that all the Lords people were prophets." Christopher Rowland has argued that this includes everyone in the task of speaking out about what they saw. Prophecy for Blake, however, was not a prediction of the end of the world, but telling the truth as best a person can about what he or she sees, fortified by insight and an "honest persuasion" that with personal struggle, things could be improved. A human being observes, is indignant and speaks out: it's a basic political maxim which is necessary for any age. Blake wanted to stir people from their intellectual slumbers, and the daily grind of their toil, to see that they were captivated in the grip of a culture which kept them thinking in ways which served the interests of the powerful. The words of the poem "stress the importance of people taking responsibility for change and building a better society 'in Englands green and pleasant land.' " Popularisation The poem, which was little known during the century which followed its writing, was included in the patriotic anthology of verse The Spirit of Man, edited by the Poet Laureate of the United Kingdom, Robert Bridges, and published in 1916, at a time when morale had begun to decline because of the high number of casualties in World War I and the perception that there was no end in sight. Under these circumstances, Bridges, finding the poem an appropriate hymn text to "brace the spirit of the nation [to] accept with cheerfulness all the sacrifices necessary," asked Sir Hubert Parry to put it to music for a Fight for Right campaign meeting in London's Queen's Hall. Bridges asked Parry to supply "suitable, simple music to Blake's stanzas – music that an audience could take up and join in", and added that, if Parry could not do it himself, he might delegate the task to George Butterworth. The poem's idealistic theme or subtext accounts for its popularity across much of the political spectrum. It was used as a campaign slogan by the Labour Party in the 1945 general election; Clement Attlee said they would build "a new Jerusalem". It has been sung at conferences of the Conservative Party, at the Glee Club of the British Liberal Assembly, the Labour Party and by the Liberal Democrats. Setting to music By Hubert Parry In adapting Blake's poem as a unison song, Parry deployed a two-stanza format, each taking up eight lines of Blake's original poem. He added a four-bar musical introduction to each verse and a coda, echoing melodic motifs of the song. The word "those" was substituted for "these" before "dark satanic mills". Parry was initially reluctant to supply music for the campaign meeting, as he had doubts about the ultra-patriotism of Fight for Right; but knowing that his former student Walford Davies was to conduct the performance, and not wanting to disappoint either Robert Bridges or Davies, he agreed, writing it on 10 March 1916, and handing the manuscript to Davies with the comment, "Here's a tune for you, old chap. Do what you like with it." Davies later recalled, Davies arranged for the vocal score to be published by Curwen in time for the concert at the Queen's Hall on 28 March and began rehearsing it. It was a success and was taken up generally. But Parry began to have misgivings again about Fight for Right, and in May 1917 wrote to the organisation's founder Sir Francis Younghusband withdrawing his support entirely. There was even concern that the composer might withdraw the song from all public use, but the situation was saved by Millicent Fawcett of the National Union of Women's Suffrage Societies (NUWSS). The song had been taken up by the Suffragists in 1917 and Fawcett asked Parry if it might be used at a Suffrage Demonstration Concert on 13 March 1918. Parry was delighted and orchestrated the piece for the concert (it had originally been for voices and organ). After the concert, Fawcett asked the composer if it might become the Women Voters' Hymn. Parry wrote back, "I wish indeed it might become the Women Voters' hymn, as you suggest. People seem to enjoy singing it. And having the vote ought to diffuse a good deal of joy too. So they would combine happily". Accordingly, he assigned the copyright to the NUWSS. When that organisation was wound up in 1928, Parry's executors reassigned the copyright to the Women's Institutes, where it remained until it entered the public domain in 1968. The song was first called "And Did Those Feet in Ancient Time" and the early scores have this title. The change to "Jerusalem" seems to have been made about the time of the 1918 Suffrage Demonstration Concert, perhaps when the orchestral score was published (Parry's manuscript of the orchestral score has the old title crossed out and "Jerusalem" inserted in a different hand). However, Parry always referred to it by its first title. He had originally intended the first verse to be sung by a solo female voice (this is marked in the score), but this is rare in contemporary performances. Sir Edward Elgar re-scored the work for very large orchestra in 1922 for use at the Leeds Festival. Elgar's orchestration has overshadowed Parry's own, primarily because it is the version usually used now for the Last Night of the Proms (though Sir Malcolm Sargent, who introduced it to that event in the 1950s, always used Parry's version). By Wallen In 2020 a new musical arrangement of the poem by Errollyn Wallen, a British composer born in Belize, was sung by South African soprano Golda Schultz at the Last Night of the Proms. Parry's version was traditionally sung at the Last Night, with Elgar's orchestration; the new version, with different rhythms, dissonance, and reference to the blues, caused much controversy. While the song was often considered to be patriotic, in reality Jerusalem has always been an anti-establishment tract. Use as a hymn Although Parry composed the music as a unison song, many churches have adopted "Jerusalem" as a four-part hymn; a number of English entities, including the BBC, the Crown, cathedrals, churches, and chapels regularly use it as an office or recessional hymn on Saint George's Day. However, some clergy in the Church of England, according to the BBC TV programme Jerusalem: An Anthem for England, have said that the song is not technically a hymn as it is not a prayer to God; consequently, it is not sung in some churches in England. It was sung as a hymn during the wedding of Prince William and Catherine Middleton in Westminster Abbey. Many schools use the song, especially public schools in Great Britain (it was used as the title music for the BBC's 1979 series Public School about Radley College), and several private schools in Australia, New Zealand, New England and Canada. In Hong Kong, diverted version of "Jerusalem" is also used as the school hymn of St. Catherine's School for Girls, Kwun Tong and Bishop Hall Jubilee School. "Jerusalem" was chosen as the opening hymn for the London Olympics 2012, although "God Save the Queen" was the anthem sung during the raising of the flag in salute to the Queen. Some attempts have also been made to increase its use elsewhere with other words; examples include the state funeral of President Ronald Reagan in Washington National Cathedral on 11 June 2004, and the state memorial service for Australian Prime Minister Gough Whitlam on 5 November 2014. It has been sung on BBC's Songs Of Praise for many years; in a countrywide poll to find the UK's favourite hymn in 2019, it was voted top, relegating previous favourite "How Great Thou Art" into second place. Proposal as English anthem Upon hearing the orchestral version for the first time, King George V said that he preferred "Jerusalem" over the British national anthem "God Save the King". "Jerusalem" is considered to be England's most popular patriotic song; The New York Times said it was "fast becoming an alternative national anthem," and there have been calls to give it official status. England has no official anthem and uses the British national anthem "God Save the King", also unofficial, for some national occasions, such as before English international football matches. However, some sports, including rugby league, use "Jerusalem" as the English anthem. "Jerusalem" is the official hymn of the England and Wales Cricket Board, although "God Save the Queen" has been sung before England's games on several occasions, including the 2010 ICC World Twenty20, the 2010–11 Ashes series and the 2019 ICC Cricket World Cup. Questions in Parliament have not clarified the situation, as answers from the relevant minister say that since there is no official national anthem, each sport must make its own decision. As Parliament has not clarified the situation, Team England, the English Commonwealth team, held a public poll in 2010 to decide which anthem should be played at medal ceremonies to celebrate an English win at the Commonwealth Games. "Jerusalem" was selected by 52% of voters over "Land of Hope and Glory" (used since 1930) and "God Save the Queen". In 2005 BBC Four produced Jerusalem: An Anthem For England highlighting the usages of the song/poem and a case was made for its adoption as the national anthem of England. Varied contributions come from Howard Goodall, Billy Bragg, Garry Bushell, Lord Hattersley, Ann Widdecombe and David Mellor, war proponents, war opponents, suffragettes, trade unionists, public schoolboys, the Conservatives, the Labour Party, football supporters, the British National Party, the Women's Institute, London Gay Men's Chorus, London Community Gospel Choir, Fat Les and naturists. Cultural significance Enduring popularity The popularity of Parry's setting has resulted in many hundreds of recordings being made, too numerous to list, of both traditional choral performances and new interpretations by popular music artists. The song has also had a large cultural impact in Great Britain. It is sung every year by an audience of thousands at the end of the Last Night of the Proms in the Royal Albert Hall and simultaneously in the Proms in the Park venues throughout the country. Similarly, along with "The Red Flag", it is sung each year at the closing of the annual Labour Party conference. The song was used by the National Union of Women's Suffrage Societies (indeed Parry transferred the copyright to the NUWSS in 1918; the Union was wound up in 1928 after women won the right to vote). During the 1920s many Women's Institutes (WI) started closing meetings by singing it, and this caught on nationally. Although it was never adopted as the WI's official anthem, in practice it holds that position, and is an enduring element of the public image of the WI. A rendition of "Jerusalem" was included in the 1973 album Brain Salad Surgery by the progressive rock group Emerson, Lake & Palmer. The arrangement of the hymn is notable for its use of the first polyphonic synthesizer, the Moog Apollo. It was released as a single, but failed to chart in the United Kingdom. An instrumental rendition of the hymn was included in the 1989 album "The Amsterdam EP" by Scottish rock band Simple Minds. "Jerusalem" is traditionally sung before rugby league's Challenge Cup Final, along with "Abide with Me", and before the Super League Grand Final, where it is introduced as "the rugby league anthem". Before 2008, it was the anthem used by the national side, as "God Save the Queen" was used by the Great Britain team: since the Lions were superseded by England, "God Save the Queen" has replaced "Jerusalem". Since 2004, it has been the anthem of the England cricket team, being played before each day of their home test matches. It was also used in the opening ceremony of the 2012 Summer Olympics held in London and inspired several of the opening show segments directed by Danny Boyle. It was included in the ceremony's soundtrack album, Isles of Wonder. Use in film, television and theatre "Bring me my Chariot of fire" inspired the title of the film Chariots of Fire. A church congregation sings "Jerusalem" at the close of the film and a performance appears on the Chariots of Fire soundtrack performed by the Ambrosian Singers overlaid partly by a composition by Vangelis. One unexpected touch is that "Jerusalem" is sung in four-part harmony, as if it were truly a hymn. This is not authentic: Parry's composition was a unison song (that is, all voices sing the tune – perhaps one of the things that make it so "singable" by massed crowds) and he never provided any harmonisation other than the accompaniment for organ (or orchestra). Neither does it appear in any standard hymn book in a guise other than Parry's own, so it may have been harmonised specially for the film. The film's working title was "Running" until Colin Welland saw a television programme, Songs of Praise, featuring the hymn and decided to change the title. The hymn has featured in many other films and television programmes including Four Weddings and a Funeral, How to Get Ahead in Advertising, The Loneliness of the Long Distance Runner, Saint Jack, Calendar Girls, Season 3: Episode 22 of Star Trek: Deep Space Nine, Goodnight Mr. Tom, Women in Love, The Man Who Fell to Earth, Shameless, Jackboots on Whitehall, Quatermass and the Pit, Monty Python's Flying Circus, and Collateral (UK TV series). An extract was heard in the 2013 Doctor Who episode "The Crimson Horror" although that story was set in 1893, i.e., before Parry's arrangement. A bawdy version of the first verse is sung by Mr Partridge in the third episode of Season 1 of Hi-de-Hi. A punk version is heard in Derek Jarman's 1977 film Jubilee. In an episode of Peep Show, Jez (Robert Webb) records a track titled "This Is Outrageous" which uses the first and a version of the second line in a verse. A modified version of the hymn, replacing the word "England" with "Neo", is used in Neo Yokio as the national anthem of the eponymous city state. In the theatre it appears in Jerusalem, Calendar Girls and in Time and the Conways. See also Civil religion Romanticism and the Industrial Revolution Notes References External links Comparisons of the Hand Painted copies of the Preface on the William Blake Archive And did those feet in ancient time at Hymnary.org (Multiple versions) 1804 poems 1916 songs English Christian hymns English patriotic songs National symbols of England Poetry by William Blake British Israelism Musical settings of poems by William Blake British anthems Joseph of Arimathea Hymns in The New English Hymnal
2943
https://en.wikipedia.org/wiki/Dual%20wield
Dual wield
Dual wielding is the technique of using two weapons, one in each hand for training or combat. It is not a common combat practice. Although historical records of dual wielding in war are limited, there are numerous weapon-based martial arts that involve the use of a pair of weapons. The use of a companion weapon is sometimes employed in European martial arts and fencing, such as a parrying dagger. Miyamoto Musashi, a Japanese swordsman and ronin, was said to have conceived of the idea of a particular style of swordsmanship involving the use of two swords. In terms of firearms, especially handguns, dual wielding is generally denounced by firearm enthusiasts due to its impracticality. Though using two handguns at the same time confers an advantage by allowing more ready ammunition, it is rarely done due to other aspects of weapons handling. Dual wielding, both with melee and ranged weapons, has been popularized by fictional works (film, television, and video games). History Dual wielding has not been used or mentioned much in military history, though it appears in weapon-based martial arts and fencing practices. Dimachaerus were a type of Roman gladiator that fought with two swords. The name is the Latin-language borrowing of the Greek word meaning "bearing two knives" (di- dual + machairi knife) Thus, an inscription from Lyon, France, mentions such a type of gladiator, here spelled dymacherus. The dimachaeri were equipped for close-combat fighting. A dimachaerus used a pair of siccae (curved scimitar) or gladius and used a fighting style adapted to both attack and defend with his weapons rather than a shield, as he was not equipped with one. The use of weapon combinations in each hand has been mentioned for close combat in western Europe during the Byzantine, Medieval, and Renaissance era. The use of a parrying dagger such as a main gauche along with a rapier is common in historical European martial arts. North American Indian tribes of the Atlantic northeast used a form involving a tomahawk in the primary hand and a knife in the secondary. It is practiced today as part of the modern Cree martial art Okichitaw. All the above-mentioned examples, involve either one long and one short weapon, or two short weapons. An example of a dual wield of two sabres is the Ukrainian cossack dance hopak. Asia During the campaign Muslim conquest in 6th to 7th century AD, a Rashidun caliphate general named Khalid ibn Walid was reported to favor wielding two broad swords, with one in each hand, during combat. Traditional schools of Japanese martial arts include dual wield techniques, particularly a style conceived by Miyamoto Musashi involving the katana and wakizashi, two-sword kenjutsu techniques he called Niten Ichi-ryū. Eskrima, the traditional martial arts of the Philippines teaches Doble Baston techniques involving the basic use of a pair of rattan sticks and also Espada y daga or Sword/Stick and Dagger. Okinawan martial arts have a method that uses a pair of sai. Chinese martial arts involve the use of a pair of butterfly swords and hook swords. Famed for his enormous strength, Dian Wei, a military general serving under the warlord Cao Cao in the late Eastern Han dynasty of China, excelled at wielding a pair of ji (a halberd-like weapon), each of which was said to weigh 40 jin. During Wei–Jie war, Ran Min, emperor of the short-lived Ran Wei empire of China, wielded two weapons, one in each hand, and fought fiercely, inflicting many casualties on the Xianbei soldiers while mounted on the famous horse Zhu Long ("Red Dragon"). Gatka, a weapon-based martial art from the Punjab region, is known to use two sticks at a time. The Thailand weapon-based martial art Krabi Krabong involves the use of a separate Krabi in each hand. Kalaripayattu teaches advanced students to use either two sticks (of various sizes) or two daggers or two swords, simultaneously. Modern The use of a gun in each hand is often associated with the American Old West, mainly due to media portrayals. It was common for people in the era to carry two guns, but not to use them at the same time, as shown in movies. The second gun served as a backup weapon, to be used only if the main one suffered a malfunction or was lost or emptied. However, there were several examples of gunmen in the West who actually used two pistols at the same time in their gunfights: John Wesley Hardin killed a gunman named Benjamin Bradley who shot at him, by drawing both of his pistols and firing back. The Mexican vaquero Augustine Chacon had several gunfights in which he was outnumbered by more than one gunman and prevailed by equipping himself with a revolver in each hand. King Fisher once managed to kill three bandits in a shootout by pulling both of his pistols. During the infamous Four Dead in Five Seconds Gunfight, lawman Dallas Stoudenmire pulled both of his pistols as he ran out onto the street and killed one bystander and two other gunmen. Jonathan R. Davis, a prospector during the California Gold Rush, was ambushed by thirteen outlaws while together with two of his comrades. One of his friends was killed and the other was mortally wounded during the ambush. Davis drew both of his revolvers and fired, killing seven of the bandits, and killing four more with his bowie knife, causing the final two to flee. Dual wielding two handguns has been popularized by film and television. Effectiveness MythBusters compared many firing stances, including having a gun in each hand and found that, compared to the two-handed single-gun stance as a benchmark, only the one-handed shoulder-level stance with a single gun was comparable in terms of accuracy and speed. The ability to look down the sights of the gun was given as the main reason for this. In an episode the following year, they compared holding two guns and firing simultaneously—rather than alternating left and right shots—with holding one gun in the two-handed stance, and found that the results were in favor of using two guns and firing simultaneously. In media The Teenage Mutant Ninja Turtles features dual wielding being done by Leonardo with two katana swords, Raphael with two sais, and Michelangelo with two nunchucks. Sometimes, their arch enemy known as the Shredder dual wields with many weapons. Princess Mononoke features Lady Eboshi dual wielding with a katana sword and a hairpin. The Marvel Comics features dual wielding being done by Deadpool with two katana swords, Nightcrawler with two sabres, Elektra with two sais, and Black Widow with two pistols and two batons. The DC Comics features Dick Grayson and Barbara Gordon dual wielding two bastons. The Star Wars franchise features many characters dual wielding two lightsabres or more including Anakin Skywalker, Ahsoka Tano, and General Grievous. Star Wars: The Clone Wars features Palpatine and his former apprentice, Darth Maul, dual wielding two lightsabres each. The Halo franchise allows you to have two select weapons in Halo 2 and Halo 3. The Chronicles of Narnia: The Lion, the Witch and the Wardrobe features the noble centaur general Oreius dual wielding two longswords, and also the oppressive White Witch doing the same. It also features the Minotaur general Otmin dual wielding a falchion sword and a battle axe. Ip Man 3 features butterfly swords being dual wielded by Ip Man and Cheung Tin-chi. The Hobbit and The Lord of the Rings features the virtuous wizard Gandalf dual wielding a magic staff and a mystic longsword. The Mummy Returns features the adventurous Egyptologist Evelyn O'Connell and the treacherous Anck-su-namun dual wielding two sais. The Pirates of the Caribbean features characters dual wielding two swords including Jack Sparrow, Will Turner, and Elizabeth Swann. The martial arts movie Crouching Tiger, Hidden Dragon features Michelle Yeoh as Yu Shu Lien dual wielding with a dao sword which split to two, and then with two hook swords. The Three Musketeers features many characters dual fighting with rapiers and daggers. Mighty Morphin Power Rangers features Tommy Oliver dual wielding a sword and a dagger. Robin of Sherwood features Nasir, a Saracen assassin who dual wields two scimitars. Avatar: The Legend of Aang features dual wielding done by Zuko with two dao swords, Jet with two hook swords, Suki with two war fans, and Sokka with a machete along a club or a boomerang. The Transformers features dual wielding being done by many characters including Optimus Prime and Optimus Primal with two swords. Kung Fu Hustle features iron rings being dual wielded by the humble tailor of Pigsty Alley. Power Rangers: Jungle Fury features dual wielding being done by Casey Rhodes with two nunchakus and also two dao-themed Shark Sabres, Theo Martin with two tonfas and then two tessan-themed Jungle Fans, and Camille with two sais. In the Marvel Cinematic Universe martial arts film Shang Chi and the Legend of the Ten Rings features the Ten Rings be dual wielded by Wenwu, the MCU version of the Mandarin, and then by Shang Chi, his son. The musical version of The Lion King features Mufasa and his son Simba dual wielding two akrafena swords to fight. Lara Croft, the heroine of the Tomb Raider'' franchise dual wields two pistols. See also Dimachaerus Gun fu Swordsmanship References Combat Video game terminology
2948
https://en.wikipedia.org/wiki/Agner%20Krarup%20Erlang
Agner Krarup Erlang
Agner Krarup Erlang (1 January 1878 – 3 February 1929) was a Danish mathematician, statistician and engineer, who invented the fields of traffic engineering and queueing theory. By the time of his relatively early death at the age of 51, Erlang had created the field of telephone networks analysis. His early work in scrutinizing the use of local, exchange and trunk telephone line usage in a small community to understand the theoretical requirements of an efficient network led to the creation of the Erlang formula, which became a foundational element of modern telecommunication network studies. Life Erlang was born at Lønborg, near Tarm, in Jutland. He was the son of a schoolmaster, and a descendant of Thomas Fincke on his mother's side. At age 14, he passed the Preliminary Examination of the University of Copenhagen with distinction, after receiving dispensation to take it because he was younger than the usual minimum age. For the next two years he taught alongside his father. A distant relative provided free board and lodging, and Erlang prepared for and took the University of Copenhagen entrance examination in 1896, and passed with distinction. He won a scholarship to the University and majored in mathematics, and also studied astronomy, physics and chemistry. He graduated in 1901 with an MA and over the next 7 years taught at several schools. He maintained his interest in mathematics, and received an award for a paper that he submitted to the University of Copenhagen. He was a member of the Danish Mathematicians' Association (DMF) and through this met amateur mathematician Johan Jensen, the Chief Engineer of the Copenhagen Telephone Company (KTAS in Danish), an offshoot of the International Bell Telephone Company. Erlang worked for the Copenhagen Telephone Company from 1908 for almost 20 years, until his death in Copenhagen after an abdominal operation. He was an associate of the British Institution of Electrical Engineers. Contributions While working for the CTC, Erlang was presented with the classic problem of determining how many circuits were needed to provide an acceptable telephone service. His thinking went further by finding how many telephone operators were needed to handle a given volume of calls. Most telephone exchanges then used human operators and cord boards to switch telephone calls by means of jack plugs. Out of necessity, Erlang was a hands-on researcher. He would conduct measurements and was prepared to climb into street manholes to do so. He was also an expert in the history and calculation of the numerical tables of mathematical functions, particularly logarithms. He devised new calculation methods for certain forms of tables. He developed his theory of telephone traffic over several years. His significant publications include: 1909 – "The Theory of Probabilities and Telephone Conversations", which proves that the Poisson distribution applies to random telephone traffic. 1917 – "Solution of some Problems in the Theory of Probabilities of Significance in Automatic Telephone Exchanges", which contains his classic formulae for call loss and waiting time. 1920 - "Telephone waiting times", which is Erlang's principal work on waiting times, assuming constant holding times. These and other notable papers were translated into English, French and German. His papers were prepared in a very brief style and can be difficult to understand without a background in the field. One Bell Telephone Laboratories researcher is said to have learned Danish to study them. The British Post Office accepted his formula as the basis for calculating circuit facilities. In 1946, the CCITT named the international unit of telephone traffic the "erlang". A statistical distribution and programming language listed below have also been named in his honour. Erlang also made an important contribution to physiologic modeling with the Krogh-Erlang capillary cylinder model describing oxygen supply to living tissue. See also Erlang – a unit of communication activity Erlang distribution – a statistical probability distribution Erlang programming language – developed by Ericsson for large industrial real-time systems Queueing theory Teletraffic engineering References 20th-century Danish mathematicians 20th-century Danish engineers Electrical engineers Queueing theorists Danish statisticians Danish business theorists 1878 births 1929 deaths People from Ringkøbing-Skjern Municipality Danish civil engineers University of Copenhagen alumni